1.1 Purpose and Scope
The Acceptance Test Plan (ATP) is a compendium of test procedures that may be used to demonstrate that an M-UDCP complies with certain Host specifications. CableLabs and Vendors have developed these procedures to facilitate the product development and verification efforts of M-UDCP equipment suppliers. The ATP procedures contained herein shall be used by CableLabs and by any other party seeking to Verify, Self-Verify, or otherwise certify that an M-UDCP complies with the relevant specifications.
The ATP techniques are not necessarily the only methods for demonstrating compliance; it may be possible to demonstrate compliance using other procedures. CableLabs generally uses the ATP procedures when auditing applications for Host verification on behalf of the DFAST certification authority.
The ATP procedures are “black box” tests; that is, they do not require opening the equipment under test to access special test points or to invoke test modes of operation. Some requirements cannot be verified by black-box techniques, and supplier-proprietary procedures are required to test them. These supplier-proprietary test procedures are beyond the scope of the ATP.
The ATP procedures are not complete. Subsequent revisions of this document may contain additional test procedures. However, for various reasons, additional procedures will always be required to show compliance with some requirements. Furthermore, some procedures in some revisions of the ATP are incompletely specified outlines of tests that are still under development.
1.2 Test Definitions
Table 1.2-1 - Test Definitions
DUT | Device Under Test
Low channel | 77.25 MHz analog visual carrier; 79 MHz center frequency QAM (channel 5)
Mid channel | 211.25 MHz analog visual carrier; 213 MHz center frequency QAM (channel 13)
High channel | 571.25 MHz analog visual carrier; 573 MHz center frequency QAM (channel 82)
Default V/A ratio | 13 dB video-to-audio carrier ratio for NTSC signals
Default QAM signal levels | 0 dBmV (nominal) for analog, -7 dBmV for 256 QAM, -10 dBmV for 64 QAM
Default digital video | HD moving zone plate
Default QPSK signal level | -7 dBmV
Notes:
- Individual test procedures that specify a particular setup must be followed. In test procedures where no specific setup is specified, the “Default” values/levels from above may be used.
- The M-CARD should be inserted into the DUT for as many of the RF physical layer characteristics tests as possible.
Proper Reception:
For analog signals: means picture and sound with no impairment rated lower than 4 on the ITU-R BT.500 scale (“Perceptible, but not annoying”).
For digital signals: means picture and sound with no impairment rated lower than 4 on the ITU-R BT.500 scale (“Perceptible, but not annoying”).
Table 1.2-2 - FDC Noise Power Bandwidths
Transmission Rate | Noise BW | Correction Factor
1.544 Mbps | 1.00 MHz | 60 dB
2.048 Mbps | 1.33 MHz | 61 dB
3.088 Mbps | 2.01 MHz | 63 dB
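Each correction factor above is simply the stated noise bandwidth expressed in dB relative to 1 Hz, i.e. correction (dB) = 10·log10(noise BW in Hz); for example, 10·log10(1.33 × 10^6) ≈ 61 dB. This converts a 1 Hz-normalized marker reading (dBmV/Hz) into total power in the noise bandwidth, as used in the channel power measurement procedure below.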
CHANNEL POWER MEASUREMENT
Ensure that the spectrum analyzer has been on for at least 0.5 hrs and is calibrated. Refer to equipment manual for proper calibration procedure.
Configure the analyzer as follows:
Table 1.2-3 - Analyzer Configuration
Mode | Spectrum Analyzer
Center Frequency | Center of channel under test
Amplitude Units | dBmV
Input Z | 75 ohms
Ref Level | Positioned to upper 1/10 of display
Span | Wide enough to capture entire signal under test (6 MHz for FAT)
RBW | 300 kHz
VBW | 30 kHz or less
Video Averaging | ON
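Where the analyzer supports remote control, the configuration in Table 1.2-3 can also be applied programmatically. The sketch below uses pyvisa with generic SCPI mnemonics; the VISA resource address is a placeholder and the exact commands vary by analyzer model, so treat it as an illustrative assumption to be checked against the instrument's programming manual, not as part of the ATP.

```python
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.0.2.10::INSTR")  # placeholder VISA address

# Generic SCPI shown for illustration; mnemonics differ between analyzer models.
for cmd in (
    ":FREQuency:CENTer 213 MHz",       # center of channel under test (mid channel)
    ":FREQuency:SPAN 6 MHz",           # wide enough for a FAT channel
    ":UNIT:POWer DBMV",                # amplitude units in dBmV
    ":BANDwidth:RESolution 300 kHz",   # RBW
    ":BANDwidth:VIDeo 30 kHz",         # VBW (30 kHz or less)
    ":DISPlay:WINDow:TRACe:Y:RLEVel 10 dBmV",  # reference level near top of display
):
    sa.write(cmd)

# The 75-ohm input impedance and video averaging settings are typically made
# from the front panel or with model-specific commands not shown here.
```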
Change the Marker Mode to display power on a per-hertz basis (dBmV/Hz).
Move the marker to the center of the signal and record the power level as displayed.
Refer to Table 1.2-2 and obtain the proper bandwidth correction factor for the signal being measured.
The actual power level of the signal is as follows:
Channel Power = Displayed marker level (dBmV/Hz) + correction factor
For FAT QAM channels the correction factor to be used is 68 dB.
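The arithmetic behind the last two steps can be sketched in a few lines of Python (illustrative only; the -75 dBmV/Hz marker reading is a hypothetical value, not an ATP requirement):

```python
import math

def bandwidth_correction_db(bandwidth_hz):
    """Correction from a 1 Hz-normalized marker reading (dBmV/Hz) to the
    total power in the given measurement bandwidth: 10*log10(BW in Hz)."""
    return 10 * math.log10(bandwidth_hz)

def channel_power_dbmv(marker_dbmv_per_hz, bandwidth_hz):
    """Channel Power = displayed marker level (dBmV/Hz) + correction factor."""
    return marker_dbmv_per_hz + bandwidth_correction_db(bandwidth_hz)

# Rounded correction factors match Table 1.2-2 and the 68 dB FAT value:
for label, bw in [("1.544 Mbps FDC", 1.00e6), ("2.048 Mbps FDC", 1.33e6),
                  ("3.088 Mbps FDC", 2.01e6), ("6 MHz FAT QAM", 6.0e6)]:
    print(f"{label}: {bandwidth_correction_db(bw):.0f} dB")

# Hypothetical marker reading of -75 dBmV/Hz on a 6 MHz FAT QAM channel:
print(f"{channel_power_dbmv(-75.0, 6.0e6):.1f} dBmV")  # ~ -7.2 dBmV
```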