
Mixing Cable And Equipment Types


Coaxial cables and balanced cables must only be used with the corresponding type of equipment; unpredictable results will occur if the incorrect cable type is used. For instance, a balanced cable cannot be connected directly to a coaxial cable or to an amplifier designed to drive a coaxial cable. Some form of device must be connected between the two cable types so that both are correctly matched. This may be an amplifier or a video isolation transformer.

Cable Joints


Every joint in a cable produces a small change in the impedance at that point, because the mechanical layout of the conductors changes where the cable is joined. This cannot be avoided, but the change in impedance can be minimised by using the correct connectors. When in-line joints are made, ensure the mechanical layout of the joint follows the cable layout as closely as possible. The number of joints in a cable should be kept to a minimum, as each joint is a potential source of problems and will produce some reflections in the cable.

The Decibel (dB)


Cable and amplifier performance are usually defined as a certain loss or gain of signal expressed in Decibels (dB). The dB is not a unit of measure but is a way of defining a ratio between two signals. The dB was originally developed to simplify the calculation of the performance of telephone networks, where there were many amplifiers and lengths of cable on a network.

When ordinary ratios are used, the calculations become extremely difficult, and often produce very large figures, because many ratios have to be multiplied and divided to work out the signal levels of the network. These calculations become relatively simple if each ratio is converted to its logarithm, which can then simply be added and subtracted. This, therefore, is the reason for using the decibel, which in simple terms is:

10 x log (ratio)

This dB (power dB) is often used to measure power relative to a fixed level. It is not a measure in its own right. If the impedance at which the measurements are made is constant, the dB becomes 20 x log (ratio). This is the dB (voltage dB) which is normally used to define cable loss or amplifier gain in the CCTV industry.

The advantage of using this method becomes obvious when working out the performance of a network containing more than one or two items. Many people who do not use dBs all the time have problems relating them to real ratios. The key figures to remember are:

If the ratio is 2:1, then 20 x log 2 = 20 x 0.301 = 6.02, i.e. 6dB.

If the ratio is 10:1, then 20 x log 10 = 20 x 1 = 20, i.e. 20 dB.

If the ratio is 20:1, then 20 x log 20 = 20 x 1.301 = 26, i.e. 26 dB.

Similarly a ratio of 100:1 is equal to 40 dB.

Therefore, put in reverse, some common ratios are:

6 dB is a loss or gain of 2:1

20 dB is a loss or gain of 10:1

26 dB is a loss or gain of 20:1

40 dB is a loss or gain of 100:1
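These conversions can be sketched in a few lines of Python, using the 20 x log rule for voltage dB described above:

```python
import math

def ratio_to_db(ratio):
    """Voltage ratio to dB, using the 20 x log rule (constant impedance)."""
    return 20 * math.log10(ratio)

def db_to_ratio(db):
    """dB back to a voltage ratio."""
    return 10 ** (db / 20)

for r in (2, 10, 20, 100):
    print(f"{r}:1 -> {ratio_to_db(r):.1f} dB")  # 6.0, 20.0, 26.0, 40.0 dB
```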

Diagram 5 illustrates the relationship between the measure of signal to noise in dB and as a ratio.


Example Of Network Transmission


The following example illustrates a typical network and how to calculate the losses and gains.


To work out the net loss or gain of signal on a network, add the amplifier gains and subtract the cable losses.

1st cable -- loss 12dB, 1st amplifier -- gain 6dB

2nd cable -- loss 20dB, 2nd amplifier -- gain 26dB

3rd cable -- loss 6dB.

The result would be: -12dB + 6dB - 20dB + 26dB - 6dB = -6dB

i.e. half the input signal is present at the end of the 3rd cable. This calculation is much easier than if the original ratios were used.
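As a quick check, this running sum can be written directly in Python, using the stage values from the example above:

```python
# Gains positive, losses negative, all in dB (values from the example above).
stages = [-12, +6, -20, +26, -6]

net_db = sum(stages)
fraction_left = 10 ** (net_db / 20)   # convert voltage dB back to a ratio

print(net_db)          # -6 (a net loss of 6 dB)
print(fraction_left)   # roughly 0.5: half the input signal remains
```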

Reduction Of Signal To Noise Ratio.


When a video signal is amplified the noise, as well as the signal, is increased. If the amplifier were perfect then the resulting signal to noise ratio would remain unchanged. Amplifiers are not perfect and can introduce extra noise into the signal. The amount of noise introduced increases as the amplifier approaches its maximum gain setting. A typical amplifier or repeater operating at maximum gain may reduce the signal to noise ratio by about 3dB. Consequently, it is not advisable to run such equipment at maximum levels. This is similar to turning the volume up too high on a domestic hi-fi: a lot of interference becomes evident, and most units are only operated at up to about half their maximum rating.

In the same way as the net gain or loss in a network can be calculated simply by adding the dB values arithmetically, so can the reduction in signal to noise ratio. In the previous example, if the original s/n ratio is 50 dB at the camera, then after two amplifiers the s/n ratio could be reduced to 44 dB. After a further four amplifiers this could be reduced to 44 - 12 = 32 dB. At this signal to noise ratio the picture would show a lot of 'snow' and be close to the limit of a usable picture. This, then, is the limit of the distance that a video signal may be transmitted using this type of transmission. Therefore, besides calculating the losses and gains of the network, the reduction in s/n ratio must also be calculated. This example assumes the worst case. Manufacturers' data or assistance should be sought if equipment is to be used at maximum settings.
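The worst-case arithmetic can be sketched as a small helper. The 3 dB per amplifier figure is the typical worst-case value quoted above, not a fixed constant:

```python
def snr_after_amps(camera_snr_db, n_amps, snr_cost_per_amp_db=3):
    """Worst-case signal to noise ratio after n amplifiers run near maximum gain."""
    return camera_snr_db - n_amps * snr_cost_per_amp_db

print(snr_after_amps(50, 2))  # 44 dB, as in the example
```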


Misuse Of The dB


The term dB is very often misused as a measurement, which it is not. The correct way of stating a measurement is +/- Y dB relative to a stated base level. It is a common, though technically incorrect, practice not to mention the base level, which can lead to the assumption that the dB is a unit of measure.

Examples Of Typical Configuration


Diagram 7 shows some typical configurations for cabled systems.

Cable Performance

Overall cable performance is usually defined for its ability to pass high frequency signals. After selecting the correct type of cable with the desired impedance, the next most important factor is the cable transmission loss at frequencies within the video band. Most cable manufacturers provide figures at 5MHz and 10MHz. The 5MHz figure is the most important for CCTV use. The cable losses will be defined as a loss in dB at 5MHz per 100 metres. Care should be taken when dealing with cables of American origin as these are often defined as loss per 100 feet. Generally, the larger the size and the more expensive the cable, the better will be its performance. This holds true for most cables as larger conductors produce the least loss.

If the loss is given for a frequency but not the one required, the conversion is as follows. Assuming the cable is rated at 3.5 dB loss per 100 metres at 10MHz, then the loss at a frequency of 5MHz would be:



loss at 5MHz = 3.5 x √(5 ÷ 10) = 3.5 x 0.707 ≈ 2.5 dB per 100 metres

(cable loss varies approximately with the square root of frequency)

Note that before using this conversion the cable specification should be checked to ensure that it will transmit satisfactorily at 5 MHz. Some cables are designed specifically for high frequency transmission only, and will not be suitable for the lower frequencies used in CCTV.
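This conversion relies on cable loss rising roughly with the square root of frequency, a standard rule of thumb for coaxial cable at these frequencies; as a sketch:

```python
import math

def scale_loss(loss_db, rated_mhz, target_mhz):
    """Scale a quoted cable loss to another frequency, assuming loss ~ sqrt(f)."""
    return loss_db * math.sqrt(target_mhz / rated_mhz)

# 3.5 dB/100 m quoted at 10 MHz -> loss at 5 MHz:
print(round(scale_loss(3.5, 10, 5), 2))  # about 2.47 dB per 100 metres
```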


Cable Selection

The important factors when selecting a cable for a particular installation are:

1) Establish the type of cable to use, coaxial or twisted pair.

2) Select a range of cables of the correct impedance.

3) Select the correct mechanical format, i.e. normal cable to be laid in ducts or single wire armoured for direct burial etc.

4) Consider the distance the cable is required to run and calculate the length of cable required.

Do not forget to make allowances in this calculation for unseen problems in installing the cable. A minimum of a 10% allowance should always be made. This provides a safety margin to cover inaccurate site drawings, sections of the cable running vertically and other problems likely to be met during installation.

5) When the length of cable has been established, assess the high frequency loss from the cable data.

6) Once the cable loss has been estimated, then the equipment requirement can be established.
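The length and loss steps above can be sketched as follows. The cable figure is whatever the manufacturer's data gives, and the 10% allowance is the minimum recommended above:

```python
def plan_cable(measured_run_m, loss_db_per_100m_at_5mhz, allowance=0.10):
    """Return the length to order (with installation allowance) and its 5 MHz loss."""
    length_m = measured_run_m * (1 + allowance)
    loss_db = length_m / 100 * loss_db_per_100m_at_5mhz
    return length_m, loss_db

# e.g. an 800 m measured run of cable rated at 3.3 dB/100 m at 5 MHz:
length, loss = plan_cable(800, 3.3)
print(round(length), round(loss, 1))  # 880 m to order, about 29.0 dB of loss
```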


Cable Specifications

The data for twisted pair cables is not always easy to obtain. However, most telephone type cables are highly suitable for video transmission. Even the internal telephone subscriber cable can be used over quite long distances for video, with the correct equipment. (Typical losses at 5MHz are 4dB per 100 metres.) If in doubt about the suitability of a twisted pair cable, the general rules are that suitable cables will be unscreened and will have a very slow twist to the conductors, 1 to 3 twists per metre.

Many twisted pair cables are advertised as "Wide Band Data Cables." These are usually of American origin and are heavily screened. They are designed for use with computers and are generally unsuitable for video use. If a cable is to be used about which there is some doubt, it is worth testing the cable with the equipment to be used before installation. Although this may be considered as a waste of time, it can avoid a costly mistake in the installation.

Tests can be run with the cable on drums as the performance will improve when the cable is taken off the drums and installed. When faced with using existing cables on a site, the only safe way to establish if they are suitable is to run an actual test with the equipment it is intended to use.

The problems that can be encountered when attempting to use existing cables include:

Cables that have absorbed water or moisture.

The cable route is much longer than it appears.

Other cables have been connected in parallel.

Bad joints.

If in any doubt, run a transmission test.

Transmission Equipment and Methods

General

When considering the preceding details regarding cable performance, it is obvious that special equipment is required to transmit video signals over long cables. The type of equipment required is dependent on the length of cable involved and the required performance.

This equipment falls under two headings:

1) Launch Equipment

Launch equipment is designed to precondition the video signal for transmission over the cables.

2) Cable Equalising Equipment

Cable equalising amplifiers are designed to provide variable compensation to make up for the losses after the video signal has been transmitted over the cables.


Selection Of Cable And Equipment

When selecting the cable and equipment for a particular installation the following rules apply:

1) Select the cable to be used, noting the high frequency loss associated with the length of the cable selected.

2) Select the line transmission equipment required to compensate for the cable loss.

3) Sometimes it is possible to save on the installation cost by using a cheaper cable with more powerful equipment.

4) Determine the level of performance required.

5) For colour transmission, it is wise to allow a margin of 6dB extra equalisation in the equipment over the projected cable losses.

6) For high quality monochrome transmission no margin is required other than the 10% for variations in cable length mentioned previously.

7) An acceptable monochrome picture can be obtained with a net loss of 6dB over the transmission link.

Example:-

Cable = 1000 Metres of URM70 = Loss of 33dB at 5MHz.

Equipment required for full equalisation = Launch Amplifier with +12dB at 5MHz + Cable equalising amplifier with +32dB of equalising at 5MHz.

This combination of equipment provides a total of +44dB at 5MHz against a cable loss of -33dB giving +11dB at 5MHz in hand.

This configuration will provide a first class colour picture. In fact it would work well up to a cable length of 1200 metres.
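The arithmetic of this example, written out as a sketch:

```python
# Values from the example: 1000 m of URM70 at 3.3 dB/100 m (5 MHz),
# a +12 dB launch amplifier and a +32 dB equalising amplifier.
cable_loss_db = 1000 / 100 * 3.3
total_equalisation_db = 12 + 32

margin_db = total_equalisation_db - cable_loss_db
print(margin_db)  # 11.0 dB in hand at 5 MHz
```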

Transmission Levels

The normal transmission levels for video signals in the CCTV industry are:

Coaxial Cable:- 1 Volt of composite video, terminated in 75 Ohms, positive going, i.e. Sync tips at 0V and peak white at 1 Volt.

Twisted Pair Cables:- 2.0 Volts balanced, terminated in the characteristic impedance of the cable, normally between 110 and 140 Ohms.

Typical cable losses.

A selection of commonly used cable specifications is given below.

Cable ref.   Type           Impedance   Loss/100 metres (at 5 MHz)

CT125        Coaxial        75 Ohms     1.1 dB
CT305        Coaxial        75 Ohms     0.5 dB
CT600        Coaxial        75 Ohms     0.3 dB
URM70        Coaxial        75 Ohms     3.3 dB
RG59         Coaxial        75 Ohms     2.25 dB
TR42/036     Twisted Pair   110 Ohms    2.1 dB
9207         Twisted Pair   100 Ohms    2.3 dB
9182         Twisted Pair   150 Ohms    2.7 dB
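The table can be captured as a small lookup for loss calculations, with the figures as tabulated above in dB per 100 metres:

```python
# dB per 100 metres at 5 MHz, from the table above.
LOSS_PER_100M = {
    "CT125": 1.1, "CT305": 0.5, "CT600": 0.3,
    "URM70": 3.3, "RG59": 2.25,
    "TR42/036": 2.1, "9207": 2.3, "9182": 2.7,
}

def run_loss_db(cable_ref, length_m):
    """Total high frequency loss for a run of the given cable."""
    return length_m / 100 * LOSS_PER_100M[cable_ref]

print(run_loss_db("RG59", 500))  # 11.25 dB over 500 m of RG59
```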


Principles Of Transmission


The object of using special transmission amplifiers is to be able to produce a video frequency response that is a mirror image of the cable loss. The net result is that the video output will be a faithful reproduction of the input and effectively the cable loss disappears completely. The above is a much simplified version of what happens in a correctly installed transmission link.


The example in Diagram 8 shows that the equaliser response is produced by being able to adjust the gain of the amplifier at different frequencies. In this case the amplifier has five sections operating at 1, 2, 3, 4, and 5MHz.


Pre-Emphasis

If the higher frequencies of the video signal are sent at an increased level, this will reduce the high frequency noise by reducing the amount of amplification required at the end of the cable. This method of changing the video signal is known as pre-emphasis.
Cable Equalisation

A cable equalising amplifier acts rather like the audio "Graphic Equaliser" with which most people are familiar. It enables the gain of the amplifier to be adjusted independently at different frequencies within the video band. The object of this is to be able to produce a mirror image of the cable response.

Each amplifier requires setting up to match the cable with which it is to be used. Once set, it should never require readjustment unless a drastic change in the installation is made.


Test Equipment Required

Correct cable equalisation cannot be achieved without the use of special test equipment. This enables the various adjustments to be set to optimum. Some people claim to be able to set up this type of equipment "by eye". No matter how experienced a person is, the results obtained by this method will always be inferior to those produced with the proper test equipment.
Pulse And Bar Generator

This produces a special wave form that is designed to show problems in a video transmission link. The timing and period of the chroma burst are especially important in the transmission of colour signals, particularly if multiplexing equipment is incorporated in the system.

Diagram: typical pulse and bar output waveform.
Oscilloscope

This is required to observe the wave form from the pulse and bar generator and should have a bandwidth of at least 10MHz.

Object Of Adjusting The Equipment


The object of setting up the video line transmission equipment is to obtain a true replica of the Pulse and Bar wave form after it has been transmitted through the amplifiers and cable. If this is achieved, a satisfactory picture will be produced by the monitor.

Method Of Adjustment


The pulse and bar generator should be connected in place of the camera. The resultant wave form is viewed on the oscilloscope at the output of the amplifier before the monitor. If a launch amplifier is being used, the output level of this should be set first to 1 Volt with no pre-emphasis. The gain of the cable equalising amplifier should then be set to give 1 Volt output.

Diagram: waveform showing high frequency losses.

The equalising controls should then be adjusted in ascending frequency order, i.e. low frequency (LF) lift first, to obtain the best equalisation. Each control affects a different portion of the video signal. The controls may need adjusting more than once as there is a certain amount of interaction between them.

Once the controls are set to optimum in the equalising amplifier, the high frequency (HF) lift control in the launch amplifier should then be adjusted to give the required pre-emphasis. The HF lift controls in the equalising amplifier should then be able to be set to a lower level. Care must be taken to ensure that the launch amplifier output is not overloaded as this may produce peculiar results.

Repeater Amplifiers


When a video signal has to be transmitted over extremely long or poor quality cables, it is necessary to use a repeater amplifier within the system. The distance along the cable at which it should be installed can be calculated from the cable loss figures. When using repeater amplifiers, an extra allowance of 3dB should be made for the cable loss. It is better to insert a repeater amplifier in a cable run before the video signal deteriorates too much, than to attempt to equalise a very poor quality signal. There is no actual limit to the length of cable and number of repeater amplifiers that can be used. The problem that occurs is that the signal to noise ratio deteriorates with each amplifier.

The practical limit is approximately 4 repeater amplifiers in cascade with a launch and equalising amplifier at the ends of the cable. This configuration can easily operate over cable lengths of 50 Km or more if the correct type of cable is used. This applies equally to coaxial or balanced cables.
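As an illustrative sketch, the spacing between repeaters can be estimated from the cable loss figures. The 3 dB allowance is the extra margin recommended above; the 32 dB equalisation figure is borrowed from the earlier example and would depend on the actual amplifier used:

```python
def repeater_spacing_m(equalisation_db, loss_db_per_100m, allowance_db=3):
    """Maximum cable length per section, keeping the extra 3 dB allowance."""
    usable_db = equalisation_db - allowance_db
    return usable_db / loss_db_per_100m * 100

print(round(repeater_spacing_m(32, 3.3)))  # about 879 m per section of URM70
```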


Method Of Adjustment


The method of setting up a system with repeater amplifiers is identical to adjusting a single equalising amplifier. The pulse and bar signals are inserted in the cable at the position of the last repeater amplifier. This enables the final equalising amplifier to be adjusted. When this is completed, the pulse and bar unit is moved up the next section of cable to enable the last repeater to be set up. The procedure is then repeated, working along the cable towards the camera position, until the launch amplifier is reached. Great care should be taken when setting up a transmission link using repeater amplifiers, because once an error has been introduced into the video signal by an incorrectly adjusted amplifier it cannot be corrected by mis-setting another amplifier. Errors are normally additive, and a slight mis-setting of several amplifiers will produce unacceptable results.

Earth Currents


When installing TV cameras or other equipment on large sites, the potential of the earth connection provided for the equipment can vary by quite large voltages (up to 50 Volts). This can produce high currents in cables connected between different points on the site and will produce interference on the video signal.

Most video equalising amplifiers have differential inputs that can reject a certain amount of interference due to earth potential variations (up to 10 Volts). However, it is good practice, and a safe precaution, to break the earth connection using a video transformer or opto-coupled equalising amplifier on long cables. It is not safe or legal to remove earth connections from equipment and rely on the earth provided by the video cable.

This latter procedure, which is still common practice in the CCTV industry, is in breach of the electrical safety regulations, is extremely dangerous, and should on no account be used.

Video Signal Formats




The purpose of this article is to explain the main differences between the various Video Signal Formats. RGB, Component, S-Video and Composite are terms that are commonly heard, but what do they mean and which one should you use?



First, the basics. A video signal originates in one of two ways:

  • Optically - From a Camera or Scanner

  • Electronically - From a Graphics card

Irrespective of how it originates, it initially consists of electrical signals that represent the intensities of the three Primary Colours of Light: RED, GREEN and BLUE. Additionally there are two timing signals: VERTICAL SYNC, indicating the start of each frame of the picture, and HORIZONTAL SYNC, indicating the start of each line.

At its final destination it must recreate the original image by emitting Red, Green and Blue light. A TV displays this on a Cathode Ray Tube (CRT) which emits light when a beam of electrons hits a phosphor coating on the face of the tube. If you look closely at your TV with a magnifying glass you will see one of the two images below.



http://www.kat5.tv/images/dotpitch.jpg

It is how these signals are processed, stored and transmitted to the display that ultimately decides how good or bad the picture will be.

Starting with the best and working down the list, here are the definitions of the different formats and how many cables are required to convey the signal to the distant end. Typically used connectors are also shown although other connector types may also be found on equipment.



RGBHV

5 cables  (5 x BNC connectors or 15 pin High Density D-Type connector)

A PC outputs RGBHV from its VGA connector (I won't mention any digital formats as they are not relevant to this topic). This is the PUREST form of ANALOGUE video you will find, as each of the 5 signals is transferred discretely.



RGBS
aka RGB

4 cables  (4 x BNC connectors or SCART)

Discrete colour signals but a COMPOSITE SYNC (S) signal containing H & V pulses. Many items of domestic video equipment that claim to output RGB actually output RGB+CompositeVideo rather than RGB+CompositeSync. For a TV this is no problem but some monitors can get upset if they expect true composite sync pulses.



RGsB
aka SoG
Sync On Green

3 cables  (3 x BNC connectors)

As RGBS except that instead of a separate Sync, the Sync signal is sent on the GREEN colour signal just like Composite Video. This format is used by some Graphics Workstations.



Component
Video
aka Y-Cr-Cb

3 cables  (3 x BNC connectors or 3 x Phono connectors aka RCA Jacks)

Y is a black and white composite video signal containing the LUMA (brightness) information and composite sync. Cr and Cb are two signals containing matrixed colour information used to extract the red and blue from the picture information in the Y signal. Once the red and blue are removed, the only information left is green. This format is used by many DVD players, although UK display equipment rarely has inputs for Component Video.



S-Video
aka Y-C

2 cables  (4 pin MiniDin connector, SCART or 2 x BNC Connectors)

Y is a black and white composite video signal containing the LUMA (brightness) information and composite sync; CHROMA (C) contains ALL the colour information. S-Video outputs are becoming commonplace on domestic AV equipment, and almost all AV amplifiers support the switching of S-Video in addition to Composite Video.



Composite
Video

1 cable  (SCART, Phono [usually yellow] or BNC connector)

Almost the lowest of the low. A Composite Video signal contains all of the brightness, colour and timing information for the picture. Because of this there can be noticeable artefacts introduced into the picture.

In order for the colour information to be combined with the brightness and timing information it must be encoded. There are three main colour encoding systems in use throughout the world with some of them also having variants.

NTSC

Developed in the USA in the 1950s, this was the first commercial Colour TV system to be launched. Early technical difficulties earned it the nickname "Never The Same Colour".

PAL

The main rival to NTSC, PAL was a European development launched in the 1960s. Using lessons learnt from the earlier NTSC system, it employed techniques to overcome some of the colour problems suffered by its rival.

SECAM

Developed around the same time as PAL, SECAM is the French entry in the TV Standards arena.

All three colour standards are incompatible, although many modern TV sets are multi-standard and can display almost any signal.


RF

Can have many signals on 1 cable. (Coaxial Plug or F Connector)

The lowest of the low. Composite Video, and usually Audio as well, modulated onto a much higher frequency carrier. This enables multiple signals to be distributed over the same cable by choosing different carrier frequencies, and is the method used for mass distribution of TV signals, whether via Terrestrial Aerial, Cable TV feed or Satellite distribution. If the carrier frequencies are not carefully chosen, different signals on the same cable can cause harmonic interference to each other, producing strange patterning on the screen.



Which signal type is best ?

Without doubt the answer is RGBHV, but that doesn't necessarily mean it is the best one for you to use. For starters... it may not be an option available to you.



Which signal should I use ?

This depends on a number of factors:



  1. What OUTPUTS are available from the Video Source

  2. What INPUTS are available on the viewing device

  3. What CABLE do I have available

  4. Do I have to SWITCH the signal en-route to its destination

For domestic use, S-Video tends to be the most commonly supported quality format. Almost all AV amplifiers support S-Video switching, whereas very few support any higher formats such as Component or RGB.

TV sets normally only have one input that will support RGB but often have several that support S-Video. DVD Players generally have the widest range of output formats.



Can I change the signal format ?

YES. Converters are readily available to convert an RGB signal into S-Video. A well regarded converter is the RGB 2 S-Video from JS Technology. Several KAT5 customers are using these units to convert RGB signals so that they can distribute them around the home as S-Video sent over low cost CAT5 cabling.

Can I change the signal to a HIGHER format ?

You CAN... but there is nothing to be gained.

ANY form of signal conversion will cause some degradation of the signal. If you attempt to Upconvert a signal it may well look worse after you have finished. Some of the detail has already been lost in the down conversion and the best you can hope for is a signal that looks the same.

As stated at the beginning of this article, the display device ultimately has to display an RGBHV signal, so the TV or Projector already has circuitry that will do that conversion for you at no cost. It is in a TV manufacturer's interest to ensure that this conversion is as flawless as possible, and an awful lot of effort goes into the circuit design. Due to the high production volumes of TVs, top quality components become more affordable to the manufacturer. By comparison, an external converter will have much lower sales and will almost certainly be very expensive or of inferior quality.


KAT5 AV Distribution

S-Video is the ideal format to be distributed around the home over CAT5 cable using KAT5 AVS modules. The four pairs of the CAT5 cable carry the Luminance (Y) and Chrominance (C) signals and the Left and Right Audio channels.

This gives a vastly superior picture to that obtained from RF distribution, with the added attraction of Stereo Audio. If the source is Dolby Pro-Logic encoded material then surround sound will be available to any TV sets capable of reproducing it.


The Final Decision

The final decision is down to the user. Wherever possible you should use the highest quality format available, but take the source material into consideration as well. There is no point using your highest quality input for a Digital TV receiver if the channels you watch have such low bitrates that the picture suffers badly from pixellation; it will just make the artefacts more obvious. Save that input for your best source, such as DVD.



Television Broadcasting Standards

Broadcast television systems are encoding or formatting standards for the transmission and reception of terrestrial television signals. There are three main analog television systems in current use around the world: NTSC, PAL, and SECAM. These systems have several components, including a set of technical parameters for the broadcasting signal, a system for encoding color, and possibly a system for encoding multichannel television sound (MTS).

In digital television (DTV), all of these elements are combined in a single digital transmission system.



Frames


Ignoring color, all television systems work in essentially the same manner. The monochrome image seen by a camera (now, the luminance component of a color image) is divided into horizontal scan lines, some number of which make up a single image or frame. A monochrome image is theoretically continuous, and thus unlimited in horizontal resolution, but to make television practical, a limit had to be placed on the bandwidth of the television signal, which puts an ultimate limit on the horizontal resolution possible. When color was introduced, this limit of necessity became fixed. All current analog television systems are interlaced; alternate rows of the frame are transmitted in sequence, followed by the remaining rows in their sequence. Each half of the frame is called a video field, and the rate at which fields are transmitted is one of the fundamental parameters of a video system. It is related to the utility frequency at which the electricity distribution system operates, to avoid flicker resulting from the beat between the television screen deflection system and nearby mains generated magnetic fields. All digital, or "fixed pixel", displays have progressive scanning and must deinterlace an interlaced source. Use of inexpensive deinterlacing hardware is a typical difference between lower- vs. higher-priced flat panel displays (Plasma display, LCD, etc.).

All films and other filmed material shot at 24 frames per second must be transferred to video frame rates using a telecine in order to prevent severe motion jitter effects. Typically, for 25 frame/s formats (Europe and other countries with a 50 Hz mains supply), the content is played with PAL speed-up, while a technique known as "3:2 pulldown" is used for 30 frame/s formats (North America and other countries with a 60 Hz mains supply) to match the film frame rate to the video frame rate without speeding up the playback.
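The 3:2 pulldown cadence can be illustrated with a short sketch: four film frames become ten video fields (five interlaced frames), by alternately holding each film frame for three fields, then two:

```python
def pulldown_3_2(film_frames):
    """Map film frames onto video fields using the 3:2 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

print(pulldown_3_2(["A", "B", "C", "D"]))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D'] -> 10 fields
```

In this way 24 film frames map onto 60 video fields per second without changing the playback speed.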

Viewing technology

Analog television signal standards are designed to be displayed on a cathode ray tube (CRT), and so the physics of these devices necessarily controls the format of the video signal. The image on a CRT is painted by a moving beam of electrons which hits a phosphor coating on the front of the tube. This electron beam is steered by a magnetic field generated by powerful electromagnets close to the source of the electron beam.

In order to reorient this magnetic steering mechanism, a certain amount of time is required due to the inductance of the magnets; the greater the change, the greater the time it takes for the electron beam to settle in the new spot.

For this reason, it is necessary to shut off the electron beam (corresponding to a video signal of zero luminance) during the time it takes to reorient the beam from the end of one line to the beginning of the next (horizontal retrace) and from the bottom of the screen to the top (vertical retrace or vertical blanking interval). The horizontal retrace is accounted for in the time allotted to each scan line, but the vertical retrace is accounted for as phantom lines which are never displayed but which are included in the number of lines per frame defined for each video system. Since the electron beam must be turned off in any case, the result is gaps in the television signal, which can be used to transmit other information, such as test signals or color identification signals.

The temporal gaps translate into a comb-like frequency spectrum for the signal, where the teeth are spaced at line frequency and concentrate most of the energy; the space between the teeth can be used to insert a color subcarrier.

Hidden signalling

Broadcasters later developed mechanisms to transmit digital information on the phantom lines, used mostly for teletext and closed captioning:



  • PAL-Plus uses a hidden signalling scheme to indicate if it exists, and if so what operational mode it is in.

  • NTSC has been modified by the Advanced Television Systems Committee to support an anti-ghosting signal that is inserted on a non-visible scan line.

  • Teletext uses hidden signalling to transmit information pages.

  • NTSC closed captioning uses signalling that is nearly identical to teletext signalling.

  • Widescreen signalling: all 625-line systems can incorporate pulses on line 23 that flag to the display that a 16:9 widescreen image is being broadcast, though this option is not currently used on analog transmissions.

Overscan

Main article: Overscan

Television images are unique in that they must incorporate regions of reasonable-quality picture content that will never be seen by some viewers, because displays overscan the transmitted image by varying amounts.



Interlacing

Main article: Interlaced video

In a purely analog system, field order is merely a matter of convention. For digitally recorded material it becomes necessary to rearrange the field order when conversion takes place from one standard to another.



Image polarity

Another parameter of analog television systems, minor by comparison, is the choice of whether vision modulation is positive or negative. Some of the earliest electronic television systems such as the British 405-line (system A) used positive modulation. It was also used in the two Belgian systems (system C, 625 lines, and System F, 819 lines) and the two French systems (system E, 819 lines, and system L, 625 lines). In positive modulation systems, the maximum luminance value is represented by the maximum carrier power; in negative modulation, the maximum luminance value is represented by zero carrier power. All newer analog video systems use negative modulation with the exception of the French System L.

Impulsive noise, especially from older automotive ignition systems, caused white spots to appear on the screens of television receivers using positive modulation but they could use simple synchronization circuits. Impulsive noise in negative modulation systems appears as dark spots that are less visible, but picture synchronization was seriously degraded when using simple synchronization. The synchronization problem was overcome with the invention of phase-locked synchronization circuits. When these first appeared in Britain in the early 1950s one name used to describe them was "flywheel synchronisation".

Older televisions for positive modulation systems were sometimes equipped with a peak video signal inverter that would turn the white interference spots dark. This was usually user-adjustable with a control on the rear of the television labelled "White Spot Limiter" in Britain or "Antiparasite" in France. If adjusted incorrectly it would turn bright white picture content dark. Most of the positive modulation television systems ceased operation by the mid 1980s. The French System L continued until the transition to digital broadcasting. Positive modulation was one of several unique technical features that originally protected the French electronics and broadcasting industry from foreign competition and rendered French TV sets incapable of receiving broadcasts from neighboring countries.

Another advantage of negative modulation is that, since the synchronizing pulses represent maximum carrier power, it is relatively easy to arrange the receiver Automatic Gain Control to only operate during sync pulses and thus get a constant amplitude video signal to drive the rest of the TV set. This was not possible for many years with positive modulation as the peak carrier power varied depending on picture content. Modern digital processing circuits have achieved a similar effect but using the front porch of the video signal.
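The negative-modulation level scheme described above can be sketched numerically. The figures below are approximate NTSC-style values (sync tips at 100% carrier amplitude, blanking at 75%, peak white at 12.5%); exact percentages vary by system:

```python
# Hedged sketch of NTSC-style negative modulation levels: sync tips are
# transmitted at 100% carrier amplitude, blanking at 75%, and peak white
# at 12.5%. Picture luminance maps linearly between blanking and white.

SYNC, BLANKING, WHITE = 1.00, 0.75, 0.125

def carrier_amplitude(luma):
    """Map luma in [0, 1] (0 = black/blanking level, 1 = peak white)."""
    return BLANKING + (WHITE - BLANKING) * luma

print(carrier_amplitude(0.0))  # 0.75  (black)
print(carrier_amplitude(1.0))  # 0.125 (peak white)
# Sync pulses, at amplitude 1.0, always exceed any picture content,
# which is what lets the receiver AGC lock onto them.
```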

Modulation

Given all of these parameters, the result is a mostly-continuous analog signal which can be modulated onto a radio-frequency carrier and transmitted through an antenna. All analog television systems use vestigial sideband modulation, a form of amplitude modulation in which one sideband is partially removed. This reduces the bandwidth of the transmitted signal, enabling narrower channels to be used.
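A rough channel-budget calculation, using System M (NTSC) figures as an illustration, shows why the vestigial sideband saves bandwidth; the guard figure below is an approximation:

```python
# Rough channel-budget arithmetic for vestigial sideband (VSB) AM, using
# System M (NTSC) figures: a 4.2 MHz baseband video signal would need
# 8.4 MHz with double-sideband AM, but VSB keeps the full upper sideband
# plus only a 0.75 MHz vestige of the lower one.

video_bw = 4.2          # MHz, baseband video bandwidth
vestige = 0.75          # MHz, retained portion of the lower sideband
audio_and_guard = 1.05  # MHz, aural carrier offset and guard space (approx.)

dsb_channel = 2 * video_bw
vsb_channel = video_bw + vestige + audio_and_guard
print(round(dsb_channel, 2))  # 8.4 MHz if both sidebands were kept
print(round(vsb_channel, 2))  # 6.0 MHz -> the standard NTSC channel width
```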



Audio

In analog television, the analog audio portion of a broadcast is invariably modulated separately from the video. Most commonly, the audio and video are combined at the transmitter before being presented to the antenna, but in some cases separate aural and visual antennas can be used. In all cases where negative video is used, FM is used for the standard monaural audio; systems with positive video use AM sound, and intercarrier receiver technology cannot be used. Stereo, or more generally multi-channel, audio is encoded using a number of schemes which (except in the French systems) are independent of the video system. The principal systems are NICAM, which uses a digital audio encoding; double-FM (known under a variety of names, notably Zweikanalton, A2 Stereo, West German Stereo, German Stereo or IGR Stereo), in which case each audio channel is separately modulated in FM and added to the broadcast signal; and BTSC (also known as MTS), which multiplexes additional audio channels into the FM audio carrier. All three systems are compatible with monaural FM audio, but only NICAM may be used with the French AM audio systems.



Evolution

For historical reasons, some countries use a different video system on UHF than they do on the VHF bands. In a few countries, most notably the United Kingdom, television broadcasting on VHF has been entirely shut down. Note that the British 405-line system A, unlike all the other systems, suppressed the upper sideband rather than the lower, befitting its status as the oldest operating television system to survive into the color era (although it was never officially broadcast with color encoding). System A was tested with all three color systems, and production equipment was designed and ready to be built; System A might have survived, as NTSC-A, had the British government not decided to harmonize with the rest of Europe on a 625-line video standard, implemented in Britain as PAL-I on UHF only.

The French 819-line system E was a post-war effort to advance France's standing in television technology. Its 819 lines were almost high definition even by today's standards. Like the British system A, it was VHF only and remained black & white until its shutdown in 1984 in France and 1985 in Monaco. It was tested with SECAM in the early stages, but later the decision was made to adopt color in 625 lines. Thus France adopted system L on UHF only and abandoned system E.

In many parts of the world, analog television broadcasting has been shut down completely, or restricted only to low-power relay transmitters; see Digital television transition for a timeline of the analog shutdown.



PC Video

Video File Formats and CODECs

A video codec is a device or software that enables video compression and/or decompression for digital video. The compression usually employs lossy data compression. Historically, video was stored as an analog signal on magnetic tape. Around the time when the compact disc entered the market as a digital-format replacement for analog audio, it became feasible to also begin storing and using video in digital form, and a variety of such technologies began to emerge.

Audio and video call for customized methods of compression. Engineers and mathematicians have tried a number of solutions for tackling this problem.

There is a complex balance between the video quality, the quantity of the data needed to represent it (also known as the bit rate), the complexity of the encoding and decoding algorithms, robustness to data losses and errors, ease of editing, random access, the state of the art of compression algorithm design, end-to-end delay, and a number of other factors.
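One side of this balance is easy to quantify: the raw bitrate of uncompressed digital video. A back-of-the-envelope sketch (the figures are illustrative):

```python
# Why compression is needed: raw bitrate of standard-definition digital
# video versus a typical DVD-class target bitrate (figures illustrative).

width, height = 720, 576        # PAL SD frame
fps = 25
bits_per_pixel = 12             # 8-bit 4:2:0: 8 luma + 4 chroma bits/pixel

raw_bps = width * height * bits_per_pixel * fps
print(round(raw_bps / 1e6, 1))  # ~124.4 Mbit/s uncompressed

target_bps = 5e6                # ~5 Mbit/s, a common DVD video bitrate
print(round(raw_bps / target_bps))  # compression ratio of roughly 25:1
```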



Applications

Digital video codecs are found in DVD systems (players, recorders), Video CD systems, in emerging satellite and digital terrestrial broadcast systems, various digital devices and software products with video recording and/or playing capability. Online video material is encoded by a variety of codecs, and this has led to the availability of codec packs - a pre-assembled set of commonly used codecs combined with an installer available as a software package for PCs.

Encoding media by the public has seen an upsurge with the availability of CD and DVD-writers.

Video codec design

Video codecs seek to represent a fundamentally analog data set in a digital format. Because of the design of analog video signals, which represent luma and color information separately, a common first step in image compression in codec design is to represent and store the image in a YCbCr color space. The conversion to YCbCr provides two benefits: first, it improves compressibility by providing decorrelation of the color signals; and second, it separates the luma signal, which is perceptually much more important, from the chroma signal, which is less perceptually important and which can be represented at lower resolution to achieve more efficient data compression. The ratios of information stored in these channels are conventionally written as Y:Cb:Cr; see the article on chroma subsampling for more information.
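A minimal sketch of this first conversion step, assuming the full-range BT.601 coefficients (other standards use slightly different matrices):

```python
# Sketch of the BT.601 RGB -> YCbCr conversion used as a first step in
# many codecs: luma (Y) carries most perceptual detail, while Cb and Cr
# carry colour-difference information that compresses well.

def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion; inputs and outputs in 0..255."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# Pure grey has zero colour difference: Cb = Cr = 128 (the midpoint).
print(tuple(round(v) for v in rgb_to_ycbcr(128, 128, 128)))  # (128, 128, 128)
```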

Different codecs will use different chroma subsampling ratios as appropriate to their compression needs. Video compression schemes for Web and DVD make use of a 4:2:0 color sampling pattern, and the DV standard uses 4:1:1 sampling ratios. Professional video codecs designed to function at much higher bitrates and to record a greater amount of color information for post-production manipulation sample in 3:1:1 (uncommon), 4:2:2 and 4:4:4 ratios. Examples of these codecs include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's HDCAM-SR (4:4:4) and Panasonic's HD D5 (4:2:2). Apple's ProRes 422 (HQ) codec also uses 4:2:2 sampling. More codecs that sample in 4:4:4 patterns exist as well, but are less common, and tend to be used internally in post-production houses. It is also worth noting that video codecs can operate in RGB space as well. These codecs tend not to sample the red, green, and blue channels in different ratios, since there is less perceptual motivation for doing so, although the blue channel could be undersampled.
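The 4:2:0 case can be sketched by averaging each 2×2 chroma block; the function name and input values below are illustrative:

```python
# Sketch of 4:2:0 chroma subsampling: chroma planes are stored at half
# resolution both horizontally and vertically, by averaging each 2x2
# block, cutting the chroma data to a quarter of its original size.

def subsample_420(plane):
    """Average each 2x2 block of a chroma plane (even dimensions assumed)."""
    h, w = len(plane), len(plane[0])
    return [
        [
            (plane[y][x] + plane[y][x + 1] +
             plane[y + 1][x] + plane[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

chroma = [[10, 20, 30, 40],
          [10, 20, 30, 40]]
print(subsample_420(chroma))  # [[15, 35]]: 8 samples reduced to 2
```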

Some amount of spatial and temporal downsampling may also be used to reduce the raw data rate before the basic encoding process. The image data is then typically processed with a block transform that concentrates its energy into a few coefficients; the most popular such transform is the 8x8 discrete cosine transform (DCT). Codecs which make use of a wavelet transform are also entering the market, especially in camera workflows which involve dealing with RAW image formatting in motion sequences. The output of the transform is first quantized, then entropy encoding is applied to the quantized values. When a DCT has been used, the coefficients are typically scanned using a zig-zag scan order, and the entropy coding typically combines a number of consecutive zero-valued quantized coefficients with the value of the next non-zero quantized coefficient into a single symbol, and also has special ways of indicating when all of the remaining quantized coefficient values are equal to zero. The entropy coding method typically uses variable-length coding tables. Some encoders can compress the video in a multiple step process called n-pass encoding (e.g. 2-pass), which performs a slower but potentially better quality compression.
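The transform-quantize-scan-encode pipeline above can be sketched end to end; the quantizer step size and input block below are illustrative, not taken from any particular standard:

```python
import math

# Minimal sketch of the transform-coding pipeline: an 8x8 DCT, uniform
# quantization, a zig-zag scan, and run-length coding of zero-valued
# coefficients.

N = 8

def dct_2d(block):
    """Naive orthonormal 8x8 type-II DCT."""
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    return [[c(u) * c(v) * sum(
                block[y][x]
                * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                for x in range(N) for y in range(N))
             for u in range(N)] for v in range(N)]

def zigzag(matrix):
    """Scan an NxN matrix in the JPEG-style zig-zag order."""
    order = sorted(((y, x) for y in range(N) for x in range(N)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [matrix[y][x] for y, x in order]

def run_length(coeffs):
    """Encode AC coefficients as (zero-run, level) pairs plus end-of-block."""
    pairs, run = [], 0
    for level in coeffs[1:]:       # skip the DC term
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    pairs.append("EOB")            # all remaining coefficients are zero
    return pairs

block = [[16] * N for _ in range(N)]   # a flat 8x8 block
quant = [[round(c / 16) for c in row] for row in dct_2d(block)]
scanned = zigzag(quant)
print(scanned[0])           # 8: the DC coefficient carries everything
print(run_length(scanned))  # ['EOB']: every AC coefficient quantized to zero
```

A flat block compresses to a single symbol; real blocks produce a handful of (run, level) pairs that the variable-length coder then maps to short codewords.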

The decoding process consists of performing, to the extent possible, an inversion of each stage of the encoding process. The one stage that cannot be exactly inverted is the quantization stage. There, a best-effort approximation of inversion is performed. This part of the process is often called "inverse quantization" or "dequantization", although quantization is an inherently non-invertible process.
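The non-invertibility is easy to see numerically: many input values collapse onto the same quantized level, so dequantization can only return each level's representative value. The step size below is illustrative:

```python
# Why quantization cannot be inverted exactly: distinct inputs map to the
# same quantized level, and dequantization returns only the level's
# representative value (error bounded by half the step size Q).

Q = 10

def quantize(x):
    return round(x / Q)

def dequantize(q):
    return q * Q

for x in (12, 14, 17):
    q = quantize(x)
    print(x, "->", q, "->", dequantize(q))
# 12 -> 1 -> 10
# 14 -> 1 -> 10
# 17 -> 2 -> 20
```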

This process involves representing the video image as a set of macroblocks. For more information about this critical facet of video codec design, see B-frames.

Video codec designs are often standardized, i.e., specified precisely in a published document. However, only the decoding process needs to be standardized to enable interoperability. The encoding process is typically not specified at all in a standard, and implementers are free to design their encoder however they want, as long as the video can be decoded in the specified manner. For this reason, the quality of the video produced by decoding the results of different encoders that use the same video codec standard can vary dramatically from one encoder implementation to another.

Commonly used video codecs

Main article: List of codecs

A variety of video compression formats can be implemented on PCs and in consumer electronics equipment. It is therefore possible for multiple codecs to be available in the same product, avoiding the need to choose a single dominant video compression format for compatibility reasons.

Video in most of the publicly documented or standardized video compression formats can be created with multiple encoders made by different people. Many video codecs use common, standard video compression formats, which makes them compatible. For example, video created with a standard MPEG-4 Part 2 codec such as Xvid can be decoded (played back) using any other standard MPEG-4 Part 2 codec such as FFmpeg MPEG-4 or DivX Pro Codec, because they all use the same video format.

Some widely-used software codecs are listed below.



Lossless codecs

  • FFv1: FFv1's compression factor is comparable to Motion JPEG 2000, but based on quicker algorithms (allows real-time capture). Written by Michael Niedermayer and published as part of FFmpeg under the GNU GPL.

  • Huffyuv: Huffyuv (or HuffYUV) is a very fast, lossless Win32 video codec written by Ben Rudiak-Gould and published under the terms of the GNU GPL as free software, meant to replace uncompressed YCbCr as a video capture format.

  • Lagarith: A more up-to-date fork of Huffyuv is available as Lagarith.

  • YULS

  • x264 has a lossless mode.

MPEG-4 Part 2 codecs

  • DivX Pro Codec: A proprietary MPEG-4 ASP codec made by DivX, Inc.

  • Xvid: Free/open-source implementation of MPEG-4 ASP, originally based on the OpenDivX project.

  • FFmpeg MPEG-4: Included in the open-source libavcodec codec library, which is used by default for decoding and/or encoding in many open-source video players, frameworks, editors and encoding tools such as MPlayer, VLC, ffdshow or GStreamer. Compatible with other standard MPEG-4 codecs like Xvid or DivX Pro Codec.

  • 3ivx: A commercial MPEG-4 codec created by 3ivx Technologies.

H.264/MPEG-4 AVC codecs

  • x264: A GPL-licensed implementation of the H.264 video standard. x264 is only an encoder.

  • Nero Digital: Commercial MPEG-4 ASP and AVC codecs developed by Nero AG.

  • QuickTime H.264: H.264 implementation released by Apple.

  • DivX Pro Codec: An H.264 decoder and encoder was added in version 7.

Microsoft codecs

  • WMV (Windows Media Video): Microsoft's family of proprietary video codec designs including WMV 7, WMV 8, and WMV 9. The latest generation of WMV is standardized by SMPTE as the VC-1 standard.

  • MS MPEG-4v3: A proprietary and not MPEG-4 compliant video codec created by Microsoft. Released as a part of Windows Media Tools 4. A hacked version of Microsoft's MPEG-4v3 codec became known as DivX ;-).

On2 codecs

  • VP6, VP6-E, VP6-S, VP7, VP8: Proprietary high definition video compression formats and codecs developed by On2 Technologies, used in platforms such as Adobe Flash Player 8 and above, Adobe Flash Lite, JavaFX and other mobile and desktop video platforms. They support resolutions up to 720p and 1080p. VP8 has been made open source by Google under the name libvpx, or the VP8 codec library.

  • libtheora: A reference implementation of the Theora video compression format developed by the Xiph.org Foundation, based upon On2 Technologies' VP3 codec, and christened by On2 as the successor in VP3's lineage. Theora is targeted at competing with MPEG-4 video and similar lower-bitrate video compression schemes.

Other codecs

  • Schrödinger and dirac-research: implementations of the Dirac compression format developed by BBC Research at the BBC. Dirac provides video compression from web video up to ultra HD and beyond.

  • DNxHD codec: a lossy high-definition video production codec developed by Avid Technology. It is an implementation of VC-3.

  • Sorenson 3: A video compression format and codec that is popularly used by Apple's QuickTime, sharing many features with H.264. Many movie trailers found on the web use this compression format.

  • Sorenson Spark: A codec and compression format that was licensed to Macromedia for use in its Flash Video starting with Flash Player 6. It is considered an incomplete implementation of the H.263 standard.

  • RealVideo: Developed by RealNetworks. A popular compression format and codec technology a few years ago, now fading in importance for a variety of reasons.

  • Cinepak: A very early codec used by Apple's QuickTime.

  • Indeo, an older video compression format and codec initially developed by Intel.

All of the codecs above have their qualities and drawbacks. Comparisons are frequently published. The trade-off between compression power, speed, and fidelity (including artifacts) is usually considered the most important figure of technical merit.

Missing codecs and video-file issues

A common problem, when an end user wants to watch a video stream encoded with a specific codec, is that if the exact codec is not present and properly installed on the user's machine, the video won't play (or won't play optimally).



MPlayer and VLC media player contain many popular codecs in a portable standalone library, available for many operating systems, including Windows, Linux, and Mac OS X. This also avoids many problems on Windows caused by conflicting or poorly installed codecs.

Video Editing Software

Open source software

Non-linear video editing software

See also: List of free and open source software packages#Video editing

These software applications allow non-linear editing of videos:



  • Avidemux (cross-platform)

  • AviSynth (Windows)

  • Blender VSE (cross-platform)

  • CineFX, formerly known as Jahshaka (introduced as "Jahshaka Reinvented") (cross-platform)

  • Cinelerra (GNU/Linux)

  • Vizrt (Viz Easy Cut)

  • Ingex (GNU/Linux)

  • Kdenlive (GNU/Linux, Mac OS X, FreeBSD)

  • Kino (GNU/Linux)

  • LiVES (GNU/Linux, BSD, IRIX, Mac OS X, Darwin)

  • Lightworks (Windows, Mac OS X and Linux versions will be released in late 2011)

  • Lumiera (GNU/Linux)

  • Open Movie Editor (GNU/Linux)

  • OpenShot Video Editor (GNU/Linux)

  • PiTiVi (GNU/Linux)

  • VLMC VideoLan Movie Creator (GNU/Linux, Mac OS X, Windows)

Video encoding and conversion tools

  • FFmpeg

  • Format Factory

  • HandBrake

  • Ingex (GNU/Linux)

  • MEncoder

  • MPEG Streamclip

  • Nandub

  • ppmtompeg MPEG-1 encoder, part of netpbm package.

  • RAD Game Tools Bink and Smacker

  • Thoggen (GNU/Linux)

  • VirtualDub (Windows)

  • VirtualDubMod (Windows) (based on VirtualDub, but with additional input/output formats)

  • VLC Media Player (Microsoft Windows, Mac OS X, GNU/Linux)

  • WinFF GUI Video Converter (Linux, Windows)

Proprietary software

Non-linear video editing software

  • Adobe Systems

    • Premiere Elements (Mac OS X, Windows)

    • Premiere Pro (Mac OS X, Windows)

    • Encore (Mac OS X, Windows)

    • After Effects (Mac OS X, Windows)

    • Adobe Premiere Express (Adobe Flash Player)

  • Apple Inc.

    • Final Cut Express (Mac OS X)

    • Final Cut Pro (Mac OS X)

    • iMovie (Mac OS X)

  • ArcSoft ShowBiz (discontinued)

  • AVS Video Editor (Windows)

  • Autodesk Autodesk Smoke (Mac OS X)

  • Avid Technology

    • Avid DS (Windows)

    • Media Composer (Windows, Mac OS X)

    • Avid NewsCutter

    • Avid Symphony (Windows, Mac OS X)

    • Avid Studio (Windows)

    • Xpress Pro (discontinued)

    • Avid Liquid (discontinued)

  • Corel (formerly Ulead Systems)

    • VideoStudio (Windows)

    • MediaStudio Pro (discontinued)

  • CyberLink PowerDirector (Windows)

  • Edius from Thomson Grass Valley, formerly Canopus Corporation (Windows)

  • Elecard AVC HD Editor

  • EVS Broadcast Equipment

    • Xedio CleanEdit (Windows)

  • FORscene (Java on Mac OS X, Windows, Linux)

  • FXhome Limited (HitFilm) (Windows)

  • Lightworks (Windows, planned Mac OS X and Linux versions for late 2011)

  • Magix

    • Video easy

    • Movie Edit Pro

    • Video Pro X

  • Media 100

    • HD Suite (Mac OS X)

    • HDe (Mac OS X)

    • SDe (Mac OS X)

    • Producer (Mac OS X)

    • Producer Suite (Mac OS X)

  • Montage Extreme (Windows)

  • muvee Technologies

    • muvee Reveal 8.0 (Windows)

    • muvee autoProducer 6.0 (Windows)

  • NCH Videopad (Windows)

  • Nero Vision (Windows)

  • NewTek

    • Video Toaster (Windows, hardware suite)

  • Pinnacle Studio (Windows)

  • Quantel

    • iQ (Windows)

    • eQ (Windows)

    • sQ (Windows)

    • Newsbox (Windows)

  • Roxio

    • Creator and MyDVD (Windows)

    • Toast (Mac)

  • Serif MoviePlus (Windows)

  • SGO Mistika (Linux)

  • Sony Creative Software

    • Sony Vegas Movie Studio (Windows)

    • Sony Vegas Pro (Windows)

  • Windows Movie Maker (Windows)

  • Windows Live Movie Maker (Windows)

  • Womble Multimedia

    • MPEG Video Wizard DVD (Windows)

    • MPEG Video Wizard (Windows)

    • MPEG-VCR (Windows)

  • Clesh (Java on Mac OS X, Windows, Linux)

Video encoding and conversion tools

  • MPEG Video Wizard DVD (Windows)

  • Cinema Craft Encoder (MS Windows)

  • Apple Compressor (Mac OS X)

  • iCR from Snell & Wilcox (Windows)

  • On2 Flix (Mac OS X, Windows)

  • ProCoder from Thomson Grass Valley, formerly Canopus Corporation (MS Windows)

  • Apple QuickTime Pro (Mac OS X, Windows)

  • Roxio Easy Media Creator

  • Sorenson Squeeze

  • Telestream Episode (Mac OS X, Windows)

  • TMPGEnc (Windows)

  • Elecard Converter Studio line

Freeware (free proprietary software)

Non-linear video editing software

  • Pinnacle VideoSpin (Windows)

Video encoding and conversion tools

  • FormatFactory (Windows)

  • Ingest Machine DV (Windows)

  • MediaCoder

  • MPEG Streamclip (Windows, Mac OS X)

  • SUPER (Windows): a frontend for FFmpeg, MEncoder, and a few other encoders; also contains DirectShow optimizations.

  • ZConvert (Windows)

  • TMPGEnc Commercial Version (Windows)

  • Windows Media Encoder (Windows)

  • XMedia Recode

Online software

Video encoding and conversion tools

  • Zamzar

  • Zencoder

Media management and online video editing

  • Kaltura

  • Plumi

Animation: Types of Animation

Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions in order to create an illusion of movement. The effect is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in several ways. The most common method of presenting animation is as a motion picture or video program, although there are other methods.

