Digital Television Basics: Tutorial




http://www.dtg.org.uk/reference/tutorial.html

Active Format Descriptor

Conditional Access (CA)

Content Decoder

Copy Protection

The DVB Project

Local Loop

MPEG Encoding

Multimedia Home Platform (MHP)

Near Video on Demand

Security

Service Information (SI)

Transmission

Web Browsers
Active Format Descriptor

There are now more than 250,000 widescreen televisions in the UK and the overwhelming reaction is that widescreen greatly enhances the viewing experience. Going back to 4x3 is like returning to black and white ... viewers wonder why they put up with 4x3 for so long. Their main complaint is the lack of widescreen transmissions. Herein lies a 'chicken and egg' problem: broadcasters maintain they cannot transmit more widescreen whilst there are so many 4x3 television sets still in use, and viewers say they will not buy a widescreen television set until there is more widescreen programming!


Enter digital terrestrial television (DTT). The new services require new equipment, so broadcasters can transmit much higher quantities of widescreen without disadvantaging existing viewers. Broadcasters see widescreen as a sales feature of DTT, and manufacturers have been quick to respond with advertising campaigns encouraging customers to purchase a widescreen television set as a sensible preparation for digital.

One of the complications for broadcasters is that their archives are full of 4x3 programmes. Only in the last couple of years have most broadcasters been shooting some programming in widescreen 16x9 format, so there will continue to be some 4x3 'letterbox' programmes on digital, particularly on themed repeat channels. Therefore, manufacturers will continue to provide a variety of 'zoom' functions on digital television sets. The Active Format Descriptor (AFD) is a signal that broadcasters will transmit with the picture to enable television sets to display the picture to best effect.

What counts as best effect depends on the viewer's display (4x3 or 16x9) and on the viewer's preferences. Most viewers like to 'see a screenful', but many prefer to set their televisions so that not too much picture is cropped. Some like to see the whole picture, even if that means black bands at the sides of the screen. In general, widescreen televisions offer five options when the transmission is not full 16x9 widescreen:
4 by 3

14 by 9


Zoom (16 by 9)

Smart/Panorama

Auto
The latter three modes need some explanation. Zoom keeps the correct shape but crops the top and bottom of a 4x3 picture quite considerably. Smart/panorama avoids the cropping by stretching the picture horizontally to fill the screen, but makes everyone look as though they need to go on a diet. Neither of these modes is entirely satisfactory. What we need is a signal from the broadcaster to say, for example, 'This picture can be zoomed in and cropped as far as 14x9 (because we have shot it with this compromise in mind)' or 'This picture is actually letterbox so, even if you normally prefer to watch in 4x3, you can zoom it in to 14x9.' This is where the AFD comes in. It is a way of giving the TV set more information so that it can adjust the zoom for best effect, taking account of the viewer's preferences.
Lastly, when a viewer records a programme on an existing analogue VCR, the digital bitstream flag which indicates a 16x9 programme is lost. Therefore, the DTG is recommending that manufacturers generate a 'line 23' WSS (widescreen signalling) signal on the output feed to the VCR. Without it, widescreen programmes will appear distorted. Use of the AFD in the WSS generation gives the viewer the same fine control of zoom when viewing recordings as when watching off-air transmissions.
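
As a rough illustration of how a receiver might act on the AFD, the following Python sketch combines the broadcast hint, the display type and the viewer's preference to choose one of the modes listed above. The AFD labels, mode names and decision rules are simplified placeholders invented for this example, not the actual codes or behaviour defined in the DVB/DTG specifications.

# Illustrative only: how a widescreen receiver might pick a zoom mode from
# the broadcast AFD hint plus the viewer's stored preference. The AFD labels
# and mode names are placeholders, not the real DVB-defined values.

def choose_display_mode(afd, display_is_widescreen, viewer_pref):
    """afd: 'full_4x3', 'protected_14x9' (safe to crop to 14x9) or
            'letterbox_16x9' (the active picture is really widescreen).
    viewer_pref: 'no_crop', 'some_crop' or 'fill_screen'."""
    if not display_is_widescreen:
        return "4x3"                      # a 4x3 set shows the picture as sent
    if afd == "letterbox_16x9":
        return "zoom_16x9"                # zoom out the letterboxed picture
    if afd == "protected_14x9" and viewer_pref != "no_crop":
        return "14x9"                     # broadcaster shot with a 14x9 crop in mind
    if viewer_pref == "fill_screen":
        return "smart_panorama"           # stretch horizontally to fill the screen
    return "4x3_pillarbox"                # whole picture, black bars at the sides

print(choose_display_mode("protected_14x9", True, "some_crop"))   # -> 14x9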

Conditional Access (CA)

Introduction

A conditional access system comprises a combination of scrambling and encryption to prevent unauthorised reception. Encryption is the process of protecting the secret keys that are transmitted with a scrambled signal to enable the descrambler to work. The scrambler key, called the control word, must of course be sent to the receiver in encrypted form as an entitlement control message (ECM). The CA subsystem in the receiver will decrypt the control word only when authorised to do so; that authority is sent to the receiver in the form of an entitlement management message (EMM). This layered approach is fundamental to all proprietary CA systems in use today.
The system block schematic is shown below:

The control word is changed at intervals of typically 10 seconds. The key under which the control words are encrypted, sometimes called the multi-session key, is changed at perhaps monthly intervals to avoid hackers gaining ground.
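
The layering can be sketched in a few lines of Python. The toy XOR 'cipher', the message fields and the key names below are invented for illustration; real CA systems use proprietary scrambling algorithms and smart-card cryptography.

# Toy model of the layered CA scheme: an EMM grants entitlement and delivers
# the longer-lived key; an ECM carries the short-lived control word encrypted
# under that key; the control word descrambles the content. XOR stands in for
# the real (proprietary) algorithms.

def xor_bytes(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyCASubsystem:
    def __init__(self):
        self.entitlements = set()
        self.service_key = None

    def process_emm(self, emm):
        self.entitlements.add(emm["service_id"])
        self.service_key = emm["service_key"]

    def control_word_from_ecm(self, ecm):
        if ecm["service_id"] not in self.entitlements:
            return None                    # not authorised: control word withheld
        return xor_bytes(ecm["encrypted_control_word"], self.service_key)

control_word = b"\x13\x37\xbe\xef"                       # changes every ~10 seconds
scrambled = xor_bytes(b"some programme payload", control_word)

ca = ToyCASubsystem()
ca.process_emm({"service_id": 1, "service_key": b"\xaa\x55"})
cw = ca.control_word_from_ecm({"service_id": 1,
                               "encrypted_control_word": xor_bytes(control_word, b"\xaa\x55")})
print(xor_bytes(scrambled, cw))                          # descrambled payload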


The Development of Standards

Way back in 1988, an attempt was made by France Telecom and others to develop a standard encryption system for Europe. The result was Eurocrypt. Unfortunately, in its early manifestations it was not particularly secure, and multiplex operators went their own way. Thus, in 1992, when the DVB started its consideration of CA systems, it recognised that the time had passed when a single standard could realistically be agreed and settled for the still difficult task of seeking a common framework within which different systems could exist and compete.


They therefore defined an interface structure, the Common Interface, which would allow the set top box to receive signals from several service providers operating different CA systems. The CA system resides in the common interface module rather than in the STB itself, allowing multiple modules to be plugged into a single STB if necessary. However, there were serious objections to the Common Interface from many CA suppliers on the grounds that the extra cost would be unacceptable, so the DVB stopped short of mandating the Common Interface, instead recommending it, along with simulcrypt. The Common Interface was endorsed by CENELEC in May 1996 and the DTG unanimously adopted its use for digital terrestrial transmission in the UK at its meeting on 13th May 1996.
Since then the European Commission has required the use of a common interface mechanism for all integrated TV sets (excluding STBs, which may employ embedded CA systems) and this is likely to be the eventual outcome - an embedded CA system in subsidised STBs and Common Interface slots in all other devices. It should be noted that the Common Interface connector allows plug-in cards for other functions besides CA; for example, it is proposed to provide audio description for the visually impaired using a common interface card.
Simulcrypt allows two CA systems to work side by side, transmitting separate entitlement messages to two separate types of set top unit (STU) with different CA systems. It also gives the multiplex provider the opportunity to increase his viewer base by cooperating with other multiplex operators. Technical simulcrypt is the same thing but within a single multiplex, thus giving the multiplex operator some leverage with the CA suppliers.
The simulcrypt system is shown diagrammatically below. Note that it requires cooperation between CA suppliers - something which does not come naturally!

If a viewer wishes to receive services from different providers who do not simulcrypt each other's ECMs, the only option is to acquire separate decryption for each CA system. The Common Interface enables a multicrypt environment, allowing an additional CA system to be added as a module. This is not quite the panacea it seems, since it still requires the CA vendor to develop the module, something he is unlikely to be keen on if his best customer doesn't approve. In practice, the possibility of multicrypt encourages the parties to conclude a simulcrypt agreement.



Content Decoder

Introduction

The Content Decoder is a UK term for the simplified MHEG-5 API that will deliver information services to the digital terrestrial viewer. It defines how the TV picture may be displayed with added textual information, either as an overlay or as an information screen with picture insert, and how the display may be given over entirely to data services. The API takes its commands from the MMI (Man Machine Interface - i.e. the remote control to you and me!) and provides the software to allow the viewer to select or 'navigate' between available services.
Object Carousel

A fundamental source for the content decoder is the transmission of data in the form of object carousels. When the viewer changes channels, it is necessary to update the receiver with data pertinent to the new selection. Thus data is transmitted repeatedly in cycles or 'carousels', as they are called. The system adopted by UK DTT is the DAVIC DSM-CC (Digital Storage Media Command and Control) object carousel, which defines the transmission of data objects in 4096-byte sections.


The objects may be information for display on the screen, or they may be applications, software modules which carry out functions within the television receiver. These applications are stored in the receiver memory and the MHEG engine has to manage their storage and release within the receiver's memory constraints.
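A much-simplified sketch of the carousel idea is given below. The 4096-byte section size comes from the description above; the object names, the flat name-plus-payload object model and the assembly logic are invented for illustration and gloss over the real DSM-CC directory, file and stream object types.

# Simplified object carousel: objects are split into sections of up to 4096
# bytes and transmitted round and round; the receiver collects sections until
# the object it wants is complete. Object names and payloads are invented.

SECTION_SIZE = 4096

def sections_of(name, payload):
    chunks = [payload[i:i + SECTION_SIZE]
              for i in range(0, len(payload), SECTION_SIZE)] or [b""]
    return [(name, i, len(chunks), chunk) for i, chunk in enumerate(chunks)]

def receive(carousel_cycle, wanted):
    assembled = {}
    while True:                                  # keep listening; the carousel repeats
        for name, idx, total, chunk in carousel_cycle():
            parts = assembled.setdefault(name, {})
            parts[idx] = chunk
            if name == wanted and len(parts) == total:
                return b"".join(parts[i] for i in range(total))

objects = {"menu.app": b"x" * 9000, "news.txt": b"headline data"}

def carousel_cycle():
    for name, payload in objects.items():
        yield from sections_of(name, payload)

print(len(receive(carousel_cycle, "menu.app")))   # -> 9000, assembled from 3 sections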
Writing to the Screen

An essential function of the content decoder is the assembling of objects to be displayed in the screen memory. In a computer, the variety of screen resolutions and user-controlled windows mean that the positioning of text and the overall appearance is indeterminate. The API for television is fundamentally different in that the elements of the scene have to be positioned precisely. In other ways, though, the mixing of text, still and moving pictures has close parallels with the computer, though the restricted memory and lack of disc storage mean that only a very few data types are specified.


Text and Hypertext

The UK content decoder specifies a special resident font, Tiresias, which has been designed in conjunction with the British Royal National Institute for the Blind, with the objective of achieving maximum legibility on the screen for all viewers, including the visually impaired. The font is defined in four sizes, heading, subtitle, body and footnote, and in plain and bold styles only. Both text and graphics make use of a 256 colour palette, though the requirements of anti-aliasing reduce the numbers that may be displayed simultaneously.


Hypertext links are defined, allowing the user to select a screen 'hotspot' and branch to new text, change channels, or carry out other application defined functions.
The Relationship to HTML

Hypertext Markup Language (HTML) is universally used on the internet and the massive volumes of expensively crafted content dictate that it should be possible to reuse previously created HTML over the television API. However, computer users will know all too well that HTML is evolving rapidly and does not have the required stability for television usage. MHEG-5 defines markup coding for text and hypertext objects which are functionally equivalent to a basic subset of HTML but coded differently to improve transmission efficiency. A simple translation process in the MHEG editor allows HTML files to be input to the information service.



Copy Protection

The unauthorised copying of pre-recorded video tapes by domestic users is a big problem. Nearly one in three US video households has two VCRs, which may be used for back-to-back copying - linking together two video recorders with copying cables and then dubbing from one machine to the other. A survey of 1,000 VCR households in the US found that one illegal copy is made for every four videos sold. More than 30 per cent of American VCR households have unauthorised copies of pre-recorded videos in their libraries.


The big name in the anti-copying market is California-based Macrovision, whose eponymous system has become the de facto standard in the video industry. Since 1986, its system has been used to protect 2 billion video cassettes from back-to-back copying, and each year a further 400 million video tapes are encoded with the system. In its original analogue form, the Macrovision system adds a series of electronic pulses to the vertical blanking interval (VBI). When a Macrovision-encoded video tape is played in a VCR, the resulting picture looks normal. But if the video tape is copied, the Macrovision pulses 'fool' the copying video recorder into reacting as if the video signal was much stronger than it actually is. The VCR compensates for this by recording a weak video signal on to the tape, which plays havoc with a television's playback and synchronisation circuits. The resulting picture may roll on-screen, lose colour or suffer from flashing.
Macrovision says its system is 90 per cent effective against known television and video combinations. Users of the Macrovision system include the Hollywood film studios - in the UK, for example, 60 per cent of sell-through videos and 72 per cent of all rental video cassettes are Macrovision protected.
With the coming of digital, the need to prevent illegal copying becomes even more pressing. Major feature films are shown on pay-per-view and subscription channels much earlier than they appear on free-to-air television and, without the degradations of analogue transmission, video pirating could thrive. With the prospect of digital recorders around the corner, West Coast film distributors are requiring copy protection as a condition of transmission contracts.

The DVB Project

The DVB project was set up in 1993 and came from a market-led perception that digital broadcasting to the home needed technical standards of transmission to avoid the anarchy of proprietary boxes which has developed in analogue satellite transmission. However, the DVB is not itself a standards-making body; it provides a forum for suppliers to agree specifications which are then passed to existing standards-making bodies (ETSI, ISO) for ratification. DVB is market led, so that so-called 'commercial modules' pass requirements to 'technical modules' and not the other way around.


An early decision was not to 'reinvent the wheel', so MPEG-2 was readily adopted. But the use of MPEG-2 alone will not make services interoperable; modulation, multiplexing, service information and conditional access all needed defining. It was felt fundamental that all methods of distribution should be considered. Initially, the group considered itself to be European, but the interest in adopting its specifications has meant that the work has become truly international and "European" has been dropped from the title.
The first achievement, in December 1993, was a specification for digital satellite broadcasting using QPSK modulation. This was closely followed by a specification for digital cable transmission using 64QAM in January 1994. Further specifications for service information, a common scrambling system and a code of conduct for conditional access suppliers have followed. The group recognised that interactive data services were predicted to grow to over $40 billion p.a. by the millennium and this need has been reflected in all the specifications.
In terms of digital terrestrial broadcasting, the DVB opted to support OFDM, rather than a single carrier, because of the very dense relay population in Europe. Also, it was felt that high definition was not the strong driver that it had been in the USA. However, the scheme does allow for a future upgrade to include HDTV. The group considered the number of carriers in the OFDM transmission and opted to make this a user-specified parameter, allowing for an early implementation at 2,000 carriers, with the possibility of increasing to 8,000 when technology and commercial considerations allowed. The requirement for flexibility in frequency planning, and the possibility of trading data rate against coverage, was built in by requiring different forms of modulation (QPSK, 16QAM and 64QAM) and different code rates to be implemented in the receiver. The DVB specification was agreed in December 1995 and achieved ETSI approval in April 1996.
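The trade-off between data rate and ruggedness can be seen with some simple arithmetic: the number of bits carried per OFDM carrier scales with the modulation order and the code rate. The figures below are relative capacities only; actual DVB-T payload rates also depend on the guard interval, pilot carriers and Reed-Solomon overhead, which are ignored here.

# Relative capacity of the DVB-T modulation and code-rate options, ignoring
# guard intervals, pilots and outer coding. QPSK at rate 1/2 (the most rugged
# combination) is used as the baseline.

BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}
CODE_RATES = [1 / 2, 2 / 3, 3 / 4]

baseline = BITS_PER_SYMBOL["QPSK"] * (1 / 2)

for modulation, bits in BITS_PER_SYMBOL.items():
    for rate in CODE_RATES:
        relative = bits * rate / baseline
        print(f"{modulation:6s} rate {rate:.2f}: {relative:.1f}x the capacity of QPSK 1/2")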
Although, initially, terrestrial transmission was primarily led by the UK, other European countries have shown considerable interest and a European-wide interest group, DIGITAG, was established at the 1996 IBC Conference to develop and harmonise digital terrestrial television internationally.

Local Loop

This telephony term describes the connection between the subscriber and the exchange. For most of the past 100 years, the connection has been copper twisted pair buried in the pavement or distributed overhead on poles. The investment is enormous. Now, however, convergence between voice, computer and television applications means the demarcations are being eroded and there is increasing competition for the delivery of services.


The alternative systems, their applications and limitations are listed below:
Copper

For most of the history of fixed-line telephony, the bandwidth that copper provided was some 3kHz, limited by analogue techniques and designed to be the cheapest solution that the telecomms operator could get away with. However, the twisted pair is inherently capable of much higher bandwidths and over short distances can carry video or broadband data. The existing infrastructure is now being used to carry data at 56kb/sec and, using ISDN (Integrated Services Digital Network), can be extended to 128kb/sec. ISDN is now seen as an interim technology and there is massive development being devoted to Digital Subscriber Line (DSL) technology, which holds the promise of increasing the bandwidth of the copper local loop to several megabits/sec, bringing the prospect of delivering video on demand (VOD) services.


Over the years, a number of trials have been carried out, in Ipswich, Cambridge and, most recently, North London, but it is only now that DSL technology is becoming viable. BT are frustrating the industry by their unwillingness to set a timetable for the rollout of DSL technology, but this stems from the simple fact that the longer they wait, the cheaper it will be to install.
DSL is not a solution without problems. It comes in a number of variants collectively known as xDSL. ADSL (Asymmetric DSL) is the most likely to see large-scale implementation, giving greater bandwidth to the forward path to the home and less to the return path. The bandwidth of the forward path is limited by crosstalk at longer distances, and one of the most difficult compromises for BT is the chosen data rate and the corresponding percentage of existing customers who will be served without the need for curbside repeaters. With a data bandwidth of 4Mb/sec, maybe one third of existing voice subscribers will not be able to receive without expensive improvements to the local loop.
Cable

Over the last ten years, new cable companies have invested massively in alternative connections to the home. There are several different technologies in use, but the majority have fibre optic cable to the curbside cabinet and coax from there to the home. These are clearly capable of much higher bandwidths but also have limitations. In most cases the network was installed to deliver television to the home and was designed on the basis of broadcast TV services. Although the fibre to the cabinet is broadband, it may have limitations on the number of VOD services it can carry simultaneously, which may require the number of homes served by each cabinet to be reduced in the future if VOD becomes prevalent. ADSL, being grown on a telephony foundation, does not have the same problem.


Wireless Technologies

It is worth bearing in mind that other technologies may become competitive in some circumstances. MMDS (Multipoint Microwave Distribution System) could provide a most effective local loop, particularly in suburban and less densely populated areas. The technology is maturing and is being successfully trialled, but implementation awaits, amongst other things, Radio Agency licensing of the appropriate spectrum at 40GHz. GSM mobile phone systems could provide an effective slow-speed (9.6kb/s) return path, and the next generation, UMTS (Universal Mobile Telecommunications System), offers the promise of data transmission at speeds of up to 2Mb/sec. In practice, however, its use is likely to be limited by capacity.


Whilst the use of GSM for return path applications has the significant advantage of 'piggy-backing' on an extremely successful system, it must be remembered that there are specific FWA (fixed wireless access) systems which may have application in some situations. In developing countries, areas without an existing copper infrastructure may prove suitable for FWA installations in cells of 5km radius.
For television return path applications, it is also possible that a return path to the satellite or terrestrial transmitter could be established via the receive antenna. The Irish DTT rollout is planned on the basis of implementing interactive applications using an INTERACT system.

MPEG Encoding

MPEG stands for Moving Picture Experts Group, the industry committee which created the standard. MPEG is, in fact, a whole family of standards for digital video and audio signals using DCT compression. There are other ways of compressing signals, such as wavelet and fractal compression, but none has yet achieved the worldwide support that DCT has, and MPEG-2, which employs DCT compression, is certain to become the dominant standard in consumer equipment for the foreseeable future. MPEG takes the DCT compression algorithm and defines how it is used to reduce the data rate, and how packets of video and audio data are multiplexed together in a way that will be understood by an MPEG decoder.


DCT, or Discrete Cosine Transform to give it its full name, uses the fact that adjacent pixels in a picture (either physically close in the image (spatial) or in successive images (temporal)) may have the same value. Small blocks of 8 x 8 pixels are 'transformed' mathematically in a way that tends to group the common digital signal elements in a block together. DCT doesn't directly reduce the data, but the transform tends to concentrate the energy into the first few coefficients, and many of the higher-frequency coefficients are often close to zero. Bit rate reduction is achieved by not transmitting the higher-frequency elements, which have a high probability of not carrying useful information. (That is why, when things start to fail, the picture dissolves into little blocks.)
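The energy-concentrating behaviour of the DCT can be demonstrated in a few lines of Python. The sketch below builds the orthonormal 8x8 DCT-II basis directly with numpy; the crude 'zero the high-frequency coefficients' step stands in for the quantisation and variable-length coding a real MPEG encoder uses.

import numpy as np

# Build the orthonormal 8x8 DCT-II matrix D, so that the 2-D transform of a
# block is D @ block @ D.T and the inverse is D.T @ coeffs @ D.
N = 8
u = np.arange(N).reshape(-1, 1)
x = np.arange(N).reshape(1, -1)
D = np.sqrt(2 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
D[0, :] = np.sqrt(1 / N)

# A smoothly varying block, typical of flat picture areas.
block = np.outer(np.linspace(100, 140, N), np.linspace(1.0, 1.1, N))

coeffs = D @ block @ D.T             # forward 2-D DCT: energy piles up top-left
truncated = coeffs.copy()
truncated[4:, :] = 0                 # discard the higher-frequency coefficients
truncated[:, 4:] = 0

reconstructed = D.T @ truncated @ D  # inverse 2-D DCT
print(np.abs(reconstructed - block).max())   # tiny error despite dropping 3/4 of the coefficients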
MPEG has its pedigree in two developments taking place in the late 1980s: an ITU standard for video conferencing and video telephony (H.261), and JPEG - a joint working group between ISO/IEC and CCITT that chose a DCT-based algorithm for still picture coding in a competitive process early in 1988. MPEG was started in 1988 as a working group within ISO/IEC with the aim of defining standards for digital compression of motion video and audio signals.
MPEG's first aim was to define a video coding algorithm for application to 'digital storage media', in particular CD-ROM. Very rapidly the need for audio coding was added and the scope was extended from being targeted solely on CD-ROM to trying to define a 'generic' algorithm capable of being used by virtually all applications, from storage-based multimedia systems to television broadcasting and communications applications such as VoD and videophones.
MPEG's first project, MPEG-1, was published in 1993 as a three-part standard defining audio and video compression coding methods and a multiplexing system for interleaving audio and video data so that they can be played back in close synchronisation. It has been applied in the CD-i system and Video-CD for publishing full-screen motion video on CD-ROM, and a number of PC-based decoder systems are now available. It also forms the basis of a number of field trials for VoD services. It principally supports video coding at bit rates up to about 1.5 Mbit/s, giving quality similar to VHS, and virtually transparent stereo audio quality at 192 kbit/s, and is optimised for non-interlaced video signals. MPEG-1 assumes progressive scanning - the alternate fields of interlace-scanned pictures are dropped to achieve this.
During 1990, MPEG recognised the need for a second, related standard for coding video at higher data rates and in an interlaced format. The MPEG-2 standard is capable of coding standard-definition television at bit rates from about 1.5 Mbit/s to some 15 Mbit/s. MPEG-2 also adds the option of multi-channel surround sound coding. MPEG-2 is backwards compatible with MPEG-1 (i.e. MPEG-2 decoders will decode MPEG-1 pictures and sound). It is interesting to note that, for video signals coded at bit rates below about 3 Mbit/s, MPEG-1 may be more efficient than MPEG-2.
Both the MPEG-1 and MPEG-2 standards are split into three main parts: Audio coding, video coding, and system management and multiplexing. MPEG itself is split into three main sub-groups, one responsible for each part, and a number of other sub-groups to advise on implementation matters, to perform subjective tests, and to study the requirements that must be supported.
Each of the sub-groups has followed a similar procedure. Initially, the requirements that the system must support were analysed. This led to a statement of the problem and a call for proposals. That started a competitive phase during which many proposals from different laboratories were put through tests aimed at identifying promising algorithmic techniques that could be used in the collaborative phase that followed. The competitive phase lasted about a year, until the collaborative phase took over after the evaluation of the results from a large series of subjective tests of all the different proposals based on the same input material and experimental conditions. During the collaborative phase a draft specification was produced and successively refined. At the end of this stage, formal approval of the standard within ISO/IEC and ITU-T takes about a year to achieve, during which the quality of the specification can be incrementally improved.
Work on MPEG-2 began in the summer of 1990. The Main Profile algorithm of the video part of the standard was frozen in March 1993 so that no further changes would take place, and a draft of the whole audio, video and systems specification was completed in November 1993. The ISO/IEC approval process of balloting, revision and approval was completed in November 1994. The final text was published during 1995 and early implementations of the standard are now (March 1996) beginning to appear in consumer products.
MPEG aims to be a generic video coding system that supports different applications with different requirements. It is not possible to provide a single, unique method for all the different problems. Instead, MPEG has followed a 'tool-kit' approach in which an extensive set of algorithmic 'tools' is defined. For instance, coding modes are provided both for scalable and non-scalable coding systems. The coding syntax that MPEG has defined provides tools to cover different applications, and parameters can be chosen to allow working at different bit rates, picture sizes, resolutions and so on.
It is neither cost-effective nor an efficient use of bandwidth to support all the features of the standard in all applications. In order to make the standard practically useful and enforce interoperability between different implementations, MPEG has defined profiles and levels of the full standard. Roughly speaking, a profile is a sub-set of the full possible range of algorithmic tools, suitable for a particular application, and a level is a defined range of parameter values (such as picture size) that are reasonable to implement and practically useful. There are as many as six MPEG-2 profiles, though only two are currently relevant to broadcasting: Main Profile, which is essentially MPEG-1 extended to take account of interlaced scanning and encodes chroma as 4:2:0, and the professional profile, which has 4:2:2 chrominance resolution and is designed for production and post-production.
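A small sketch shows the sort of check that profiles and levels make possible: a decoder declares a profile/level and accepts only bit-streams whose parameters fall inside those bounds. The Main Level limits used below (standard-definition picture sizes and a maximum of about 15 Mbit/s) are the commonly quoted approximate figures, included only for illustration.

# Illustrative profile/level gate: accept a stream only if its parameters fit
# within the declared level. The Main Level limits here are approximate.

MAIN_LEVEL = {"max_width": 720, "max_height": 576,
              "max_frame_rate": 30, "max_bit_rate_bps": 15_000_000}

def decoder_can_accept(stream, limits=MAIN_LEVEL):
    return (stream["width"] <= limits["max_width"]
            and stream["height"] <= limits["max_height"]
            and stream["frame_rate"] <= limits["max_frame_rate"]
            and stream["bit_rate_bps"] <= limits["max_bit_rate_bps"])

sd_stream = {"width": 720, "height": 576, "frame_rate": 25, "bit_rate_bps": 6_000_000}
hd_stream = {"width": 1920, "height": 1080, "frame_rate": 25, "bit_rate_bps": 18_000_000}
print(decoder_can_accept(sd_stream), decoder_can_accept(hd_stream))   # True False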
MPEG-2 makes extensive use of motion-compensated prediction to eliminate redundancy. The prediction error remaining after motion compensation is coded using the DCT, followed by quantisation and statistical coding of the remaining data. MPEG has two types of prediction. The so-called 'P' pictures are predicted only from pictures that are displayed before the current picture. 'B' pictures, on the other hand, are predicted from two pictures, one that is displayed earlier and one later. In order to do this non-causal prediction, the encoder has to reorder the sequence of pictures before sending them to the decoder, and the decoder then has to return them to the correct display order. B pictures add complexity to the system but also produce a significant saving in bit rate. An important feature of the MPEG prediction system is the use of 'I' frames, which are coded without motion compensation. These break the chain of predictive coding so that channel switching can be done with a sufficiently short latency.
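The reordering needed for B pictures can be illustrated with a short sketch: each B picture is held back until the later reference it depends on has been sent. The IBBP pattern below is just an example group-of-pictures structure.

# Convert a display-order sequence of picture types into transmission (coded)
# order: reference pictures (I, P) go out first, then the B pictures that were
# waiting for them.

def display_to_coded_order(pictures):
    coded, pending_b = [], []
    for index, ptype in enumerate(pictures):
        if ptype == "B":
            pending_b.append((index, ptype))   # needs a later reference
        else:
            coded.append((index, ptype))       # I or P: a reference picture
            coded.extend(pending_b)
            pending_b = []
    coded.extend(pending_b)
    return coded

gop = ["I", "B", "B", "P", "B", "B", "P"]
print([f"{t}{i}" for i, t in display_to_coded_order(gop)])
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']  -- the order in which they are sent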
The most significant extension of MPEG-2 Main Profile over MPEG-1 is an improvement in the options within a picture that can be used for motion-compensated prediction of interlaced signals. MPEG-1 treats each picture as a collection of samples from the same moment in time (known as frame-based coding). MPEG-2 understands interlace - that samples within a frame come from two fields which may represent different moments in time. Therefore MPEG-2 has modes in which the data can be predicted either using one motion vector giving an offset to a previous frame, or two vectors giving offsets to two different fields.
MPEG-2 audio is a compatible extension of MPEG-1 audio. Audio compression relies on the fact that the ear cannot hear quieter sounds at frequencies close to louder ones. This psychoacoustic masking effect can be used to control the bit allocation to each sub-band. It achieves nearly transparent audio quality at 192 kbit/s per channel. With a minimal increase in bit rate it is possible to encode Dolby Pro Logic surround sound signals. MPEG-2 audio's main extension to MPEG-1 is to provide compatible methods for coding multiple-channel surround sound at between 384 and 512 kbit/s. MPEG-1 audio can be combined with MPEG-2 video, and vice versa.
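The idea that masking drives bit allocation can be caricatured in a few lines. The 'masking model' below (a sub-band is masked by a fixed fraction of its loudest neighbour) and the energy figures are invented and far cruder than the psychoacoustic models in the real MPEG audio layers; the sketch only shows that quiet bands next to loud neighbours receive few bits.

import math

# Toy bit allocation: bands with a high signal-to-mask ratio get more bits;
# bands sitting next to much louder ones are largely masked and get few or none.
subband_energy = [900.0, 40.0, 2.0, 1.5, 300.0, 5.0, 0.5, 0.2]

def allocate_bits(energies, total_bits=32, spread=0.05):
    masks = []
    for i, e in enumerate(energies):
        neighbours = energies[max(0, i - 1):i + 2]
        masks.append(max(1e-6, spread * max(neighbours)))     # crude masking threshold
    smr = [max(0.0, 10 * math.log10(e / m)) for e, m in zip(energies, masks)]
    total_smr = sum(smr) or 1.0
    return [round(total_bits * s / total_smr) for s in smr]

print(allocate_bits(subband_energy))   # loud bands get most of the bit budget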
The MPEG systems specification defines how to interleave multiple audio and video streams into a single stream, how to manage buffering at the decoder, how to synchronise the streams on playback, and how to provide time identification for each of the streams. The MPEG-1 specification allows elementary streams sharing a common time-base to be multiplexed using a flexible packet size. The packet size is normally relatively large and is chosen by the application. MPEG-1 is suited to software processing, but is less satisfactory in an environment where data errors are common.
MPEG-2 extends this performance to allow:

  • Multiple programs with independent time-bases

  • Operation in error-prone environments

  • Remultiplexing

  • Support for scrambling

Two forms of multiplexed stream are defined by MPEG-2: the program stream and the transport stream. The program stream is similar to MPEG-1. All elementary streams share a common time-base and it has the same features as MPEG-1, but additionally supports scrambling, trick modes, a directory of the contents of the multiplex and a map describing the features of the streams. It is intended for use in storage-based interactive systems where software processing is important.


The transport stream is intended for broadcast systems where error resilience is one of the most important properties. It supports multiple programmes with independent time-bases, multiplexed together with a fixed packet size of 188 bytes. It carries an extensible description of the contents of the programmes in the multiplex and supports remultiplexing and scrambling operations. (MPEG has not defined a method of scrambling - it has defined what can be scrambled and how access control data may be transmitted in an MPEG stream.) Transcoding between the different MPEG systems formats is possible and, by suitable choice of parameters, can be made relatively easy. It is likely that a systems digital interface specification may grow up around the transport stream.
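A minimal parse of the fixed-size transport packet header shows how little framing overhead is involved. Every packet is 188 bytes and begins with the 0x47 sync byte; the 13-bit PID identifies which programme component or table the payload belongs to. The example packet at the end is hand-made for illustration.

# Parse the 4-byte MPEG-2 transport stream packet header: sync byte, error and
# payload-start flags, 13-bit PID, scrambling control and continuity counter.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet):
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport stream packet")
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }

# Hand-made example: PID 0x0100, payload-unit-start set, continuity counter 5.
example = bytes([0x47, 0x41, 0x00, 0x15]) + bytes(184)
print(parse_ts_header(example))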
Each of the MPEG specifications (audio, video and systems) allows encoders and decoders from different manufacturers to operate together. The interface between the two is the compressed bit-stream that represents the coded audio and video. To achieve interoperability, MPEG has standardised the structure, content and meaning of the bit-stream and the way that it should be decoded to reconstruct the desired pictures or sound. The encoders are not standardised. This approach has the advantage of leaving considerable freedom to encoder manufacturers to improve their encoding strategies as more is learnt about encoding, or to address different market segments with different trade-offs of cost and complexity against picture quality.
The compliance parts of the standard specify when a bit-stream is compliant, when a decoder is compliant, and how to verify what has gone wrong if a decoder fails to decode a bit-stream properly. Compliant MPEG decoders are defined as being capable of decoding all bit-streams that comply with one of the defined profiles and levels. This means that all MPEG decoders decode only a sub-set of everything possible within MPEG, and decoders have to specify their capabilities (profile/level). MPEG has generated a number of test bit-streams that can be used to help with compliance testing. An essential tool in compliance testing is a bit-stream verifier: software that analyses a bit-stream to check whether or not it is compliant with the specification. It can be used as a 'referee' to determine whether it is the bit-stream or the decoder that is at fault if the system fails to work - if the bit-stream passes the verifier, it must be the decoder that is wrong.

