Introduction
This document describes the best practices and system checks to ensure that a Remote PHY (RPHY) and Converged Interconnect Network (CIN) environment can operate efficiently based on the CableLabs RPHY Specifications.
Contributed by Andy Moyer, Cisco TAC Engineer.
Prerequisites
Requirements
Cisco recommends that you have knowledge of these topics:
- Remote PHY Device (RPD)
- Cisco Converged Broadband Router (cBR-8)
- Data Over Cable Service Interface Specification (DOCSIS)
- Quality of Service (QoS)
Components Used
The information in this document is based on the cBR-8 hardware.
The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, ensure that you understand the potential impact of any command.
DSCP Values
The Precision Time Protocol (PTP) traffic to the core and the RPD must be prioritized so that PTP packets are not lost. The RPD must support the IETF RFC 2475 Differentiated Services Code Point (DSCP) values for Expedited Forwarding (EF) and Best Effort (BE) for Downstream External PHY Interface (DEPI) tunnels, as defined in the CableLabs RPHY specification CM-SP-R-PHY-I14-200323. PTP traffic is prioritized within the CIN, and the common practice is to use the same DSCP values as for DEPI tunnels. The DSCP values on the RPD are fixed in the code, and PTP is assigned a value of 46.
| Item | Per-Hop Behavior | DSCP Value |
|---|---|---|
| DOCSIS data (L2TP) | BE | 0 |
| PTP | EF | 46 |
| GCP | BE | 0 |
| MAP/UCD | EF | 46 |
| BWR/RNG_REQ | EF | 46 |
| Video | CS4 | 32 |
| MDD, Voice | CS4 | 32 |
| Acronym | Definition |
|---|---|
| L2TP | Layer 2 Tunnel Protocol |
| GCP | Generic Control Protocol |
| MAP | Bandwidth Allocation Map |
| UCD | Upstream Channel Descriptor |
| BWR | Bandwidth Request |
| RNG_REQ | Ranging Request |
| MDD | MAC Domain Descriptor |
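The fixed DSCP markings above can be captured as a simple lookup table; this is a minimal sketch for use in monitoring scripts, and the dictionary and function names are illustrative assumptions, not a Cisco API.

```python
# Fixed per-traffic-type DSCP markings from the table above.
# DSCP_EF/DSCP_CS4/DSCP_BE and RPHY_DSCP are illustrative names.
DSCP_EF, DSCP_CS4, DSCP_BE = 46, 32, 0

RPHY_DSCP = {
    "DOCSIS data (L2TP)": DSCP_BE,
    "PTP": DSCP_EF,
    "GCP": DSCP_BE,
    "MAP/UCD": DSCP_EF,
    "BWR/RNG_REQ": DSCP_EF,
    "Video": DSCP_CS4,
    "MDD, Voice": DSCP_CS4,
}

def is_expedited(traffic_type):
    # EF traffic must be prioritized end to end through the CIN
    return RPHY_DSCP[traffic_type] == DSCP_EF

print(is_expedited("PTP"))  # -> True
```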
Calculate Bandwidth
- All of the devices in the path from Core to RPD must reserve sufficient bandwidth at high priority over all other traffic to carry all the MAPs, UCDs, BWR/RNG_REQ, and PTP traffic. These formulas can be used to calculate the total EF bandwidth:
Total EF Bandwidth = MAP/UCD BW + BWR/RNG_REQ BW + PTP BW
MAP/UCD BW in bits per sec = 500 Maps/sec * 8 bits/byte * MAP-Size * No.-of-Primary-DS * No.-of-US * 2 for UEPI Maps
Worst-case MAP-Size: SC-QAM: 660 bytes, OFDMA: 1450 bytes
Note: 38.8 Mbps is the total bandwidth of a 256-QAM SC-QAM channel with overhead. For the calculation, use the highest rate configured in each Orthogonal Frequency Division Multiplexing (OFDM) channel.
From cBR-8:
cBR8# show controllers downstream-Cable <x/x/x> rf-channel 158 verbose | include rate
CTRL profile (Profile A): rate: 496000 kbps
Data profile 1 (Profile B): rate: 619000 kbps
cBR8# show controller downstream-Cable <x/x/x> counter rf-channel | count DOCSIS
Number of lines which match regexp = 32
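The EF bandwidth formulas above can be sketched in a few lines of Python; the channel counts used here (4 primary DS, 8 US) are illustrative assumptions, not values from this document.

```python
# Sketch of the EF bandwidth formulas above.
# ASSUMPTION: the channel counts below are illustrative only.
MAPS_PER_SEC = 500           # MAPs per second
MAP_SIZE_SCQAM = 660         # worst-case MAP size in bytes (SC-QAM)
MAP_SIZE_OFDMA = 1450        # worst-case MAP size in bytes (OFDMA)

def map_ucd_bw_bps(map_size_bytes, num_primary_ds, num_us):
    # The trailing * 2 accounts for the UEPI copies of each MAP
    return MAPS_PER_SEC * 8 * map_size_bytes * num_primary_ds * num_us * 2

def total_ef_bw_bps(map_ucd_bw, bwr_rng_req_bw, ptp_bw):
    return map_ucd_bw + bwr_rng_req_bw + ptp_bw

map_bw = map_ucd_bw_bps(MAP_SIZE_SCQAM, num_primary_ds=4, num_us=8)
print(f"MAP/UCD EF bandwidth: {map_bw / 1e6:.2f} Mbps")  # 168.96 Mbps here
```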
- All of the devices in the path from the CIN to the RPD must reserve enough total bandwidth through the entire path to avoid loss of data traffic. In order to calculate the required bandwidth, count the number of Downstream (DS) Single Carrier Quadrature Amplitude Modulation (SC-QAM) channels and multiply by 38. Then add the OFDM channel rate listed under Data Profile 1 in the CLI output.
- For OFDM channels, multiply the number of OFDM DS channels by this rate instead of 38.
- Total Guaranteed BW on CIN = {number of DS SC-QAM channels} * 38 + OFDM channel rate.
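As a minimal sketch of the guaranteed-bandwidth formula above (the channel count and OFDM rate are illustrative; take the real OFDM rate from the `show controllers downstream-Cable ... rf-channel ... verbose` output):

```python
# Sketch of: Total Guaranteed BW = {number of DS SC-QAMs} * 38 + OFDM rate.
# ASSUMPTION: the example inputs below are illustrative only.
SCQAM_BW_MBPS = 38           # per-channel SC-QAM bandwidth used by the formula

def total_guaranteed_bw_mbps(num_scqam_ds, ofdm_rate_mbps):
    return num_scqam_ds * SCQAM_BW_MBPS + ofdm_rate_mbps

# e.g. 32 SC-QAM DS channels plus one OFDM channel at 619 Mbps (Data Profile 1)
print(total_guaranteed_bw_mbps(32, 619))  # -> 1835
```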
CIN Checks and Outcomes
If the CIN uses Layer 3 (L3) routing, ensure that the path from the core to the RPD is unique and unambiguous. If packets take multiple routes, Cable Modem (CM) throughput can become unpredictable. These are some of the issues that can be observed due to CIN instability:
- Low TCP/UDP throughput
- TCP retries and retransmits
- Late MAPs observed on the RPD
- Time synchronization loss or a switch from PHASE-LOCK to holdover and back
- Missed MAP packets
- An increase of "SeqErr-sum-pkts" in all DS channels
- An increase of "Drop-sum-pkts" in all US channels
Note: In command examples, an ellipsis (...) indicates that some information has been omitted for readability.
From RPD:
A. Upstream MAP counter by channel:
R-PHY# show upstream map counter 0 <ch-id>
An increase in the number of unmapped minislots in this output indicates that MAPs were lost.
R-PHY# show upstream map counter 0 0
Map Processor Counters
==============================================
Mapped minislots : 297797435
Discarded minislots (chan disable): 0
Discarded minislots (overlap maps): 0
Discarded minislots (early maps) : 0
Discarded minislots (late maps) : 0
Unmapped minislots : 0
Last mapped minislot : 3003775
B. Downstream Channel Counters:
R-PHY# show downstream channel counter
Repeat this command several times over 10 seconds.
R-PHY# show downstream channel counter
------------------- Packets counter in TPMI -------------------
Level Rx-pkts Rx-sum-pkts
Node Rcv 160159 160159
Depi Pkt 0 0
Port Chan Rx-pkts Rx-sum-pkts
Port Rx-pkts Rx-sum-pkts Drop-pkts Drop-sum-pkts
DS_0 160201 160201 0 0
US_0 2417 2417 0 0
US_1 2417 2417 0 0
------------------- Packets counter in DPMI -------------------
Field Pkts Sum-pkts
Dpmi Ingress 1260566 77868982
Pkt Delete 0 0
Data Len Err 0 0
Chan Flow_id SessionId(dec/hex) Octs Sum-octs SeqErr-pkts SeqErr-sum-pkts
0 0 4390912 / 0x00430000 950 1684498 0 1
0 1 4390912 / 0x00430000 24088 1612049 0 1
0 2 4390912 / 0x00430000 7686168 474015682 0 0
0 3 4390912 / 0x00430000 0 0 0 0
1 0 4390913 / 0x00430001 704757 40898198 0 1
1 1 4390913 / 0x00430001 510 30974 0 1
1 2 4390913 / 0x00430001 0 0 0 0
...
Information about DLM
The DEPI Latency Measurement (DLM) packet is a specific type of data packet used to measure the network latency between the Converged Cable Access Platform (CCAP) core and the RPD. There are two types of DLM packets: ingress and egress. The ingress DLM measures the latency between the CCAP core and the ingress point of the RPD, and the egress DLM measures the latency between the CCAP core and the egress point of the RPD.
The Use of DLM
Note: This feature is disabled by default.
Configuration
cBR-8# conf t
cBR-8(config)# cable rpd <name>
cBR-8(config-rpd)# core-interface tenGigabitEthernet <interface x/x/x>
cBR-8(config-rpd-core)# network-delay dlm <interval in seconds>
Verification of an RPD
cBR-8# show cable rpd <xxxx.xxxx.xxxx> dlm
Load for five secs: 4%/1%; one minute: 4%; five minutes: 4%
Time source is NTP, 13:12:36.253 CST Sun Jan 1 2017
DEPI Latency Measurement (ticks) for 0000.bbaa.0002
Last Average DLM: 4993
Average DLM (last 10 samples): 4990
Max DLM since system on: 5199
Min DLM since system on: 4800
Sample # Latency (usecs)
x------------x------------
0 491
1 496
2 485
3 492
4 499
5 505
6 477
7 474
8 478
9 47
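The tick values in this output can be converted to microseconds; this is a hedged sketch that assumes the ticks run at the DOCSIS timestamp rate of 10.24 MHz. That rate is an assumption, not stated in the output, although it is consistent with the ~490 usec samples shown.

```python
# Hedged sketch: convert DLM tick readings to microseconds.
# ASSUMPTION: ticks run at the DOCSIS timestamp rate of 10.24 MHz;
# this matches the usec samples above but is not stated in the output.
TICK_RATE_HZ = 10.24e6

def ticks_to_usec(ticks):
    return ticks * 1e6 / TICK_RATE_HZ

print(round(ticks_to_usec(4990)))  # average DLM from the sample -> 487
```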
Test Commands for Additional Information
From the cBR-8, log in to the line card, then run these test commands.
cBR-8# request platform software console attach <LC#/0>
Summary of all RPDs that use DLM:
Slot-1-0# test cable md cdman show dlm 1 summary
DLM info summary
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.224.98 interval: 1 status: inact [0]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.224.97 interval: 1 status: inact [1]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.224.96 interval: 1 status: inact [2]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.224.99 interval: 1 status: inact [3]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.224.95 interval: 1 status: inact [4]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.227.96 interval: 1 status: inact [5]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.227.95 interval: 10 status: inact [6]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.227.94 interval: 1 status: inact [7]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.222.99 interval: 1 status: inact [8]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.222.97 interval: 1 status: inact [9]
rpd_id: xxxx.xxxx.xxxx rpd_ip: 10.240.222.98 interval: 1 status: inact [10]
Total 11 DLM info (max 80) ucast/mcast/recv_valid/lost/recv_all(pkts): 1000/200/1200/0/1200 <<<<<<<DLM Packets
Ctrlr DLM info summary
ctrlr: 8 rpd_id: xxxx.xxxx.xxx1 status: inact [8][0]
ctrlr: 9 rpd_id: xxxx.xxxx.xxx2 status: inact [9][0]
ctrlr: 10 rpd_id: xxxx.xxxx.xxx3 status: inact [10][0]
ctrlr: 16 rpd_id: xxxx.xxxx.xxx4 status: inact [16][0]
ctrlr: 17 rpd_id: xxxx.xxxx.xxx5 status: inact [17][0]
ctrlr: 18 rpd_id: xxxx.xxxx.xxx6 status: inact [18][0]
ctrlr: 19 rpd_id: xxxx.xxxx.xxx7 status: inact [19][0]
ctrlr: 20 rpd_id: xxxx.xxxx.xxx8 status: inact [20][0]
ctrlr: 30 rpd_id: xxxx.xxxx.xxx9 status: inact [30][0]
ctrlr: 30 rpd_id: xxxx.xxxx.xx10 status: inact [30][1]
ctrlr: 31 rpd_id: xxxx.xxxx.xx11 status: inact [31][0]
Slot-1-0# test cable md cdman show dlm 1 ipv4 <ipv4 address>
Slot-1-0#
rpd_id: 0000:0000:0000 ctrlr: 17 channel: 0
session_id: 0 local_session_id: 0
slot: 1 local_port_id: 13 te_port: 4
interval: 1 measure_only: 0 static_cin_delay: 0 static_cin_delay_usec: 0
IP mcast: <mcast addr> mcast_sec: ucast: <ucast ipv4 addr> src: <source IP> dst:
MAC src: 0000:0000:0000 next_hop: 0000:0000:0000
DLM effect: false
in_use: true refresh_mapadv: true cdm_pak_size: 66
cdm_trans_id: 0 trans_id: 0 trans_id_m_cnt: 0
rpd: ucast/mcast/recv/lost(pkts): 0/0/0/0 trigger_cnt: 0
all: ucast/mcast/recv_valid/lost/recv_all(pkts): 0/0/0/0/0
time_start: [ 0 0 0 0 0 0 0 0 0 0 ]
time_end: [ 0 0 0 0 0 0 0 0 0 0 ]
ingress: [ 0 0 0 0 0 0 0 0 0 0 ] ingress_idx: 0
timestamp: [ 0 0 0 0 0 0 0 0 0 0 ]
seq_num: [ 0 0 0 0 0 0 0 0 0 0 ]
delay_ticks min/max/avg/last_avg/sum: 0/0/0/0/0
except_cnt: 0
full_samples: false
ctrlr: 17 rpd_id: xxxx.xxxx.xxxx status: inact [17][0]
Debugs
Debug the RPD DEPI session and events, as well as DLM.
cBR-8# debug cable rpd depi
cBR-8# debug cable rpd r-depi
cBR-8# debug cable dlm tx
cBR-8# debug cable dlm rx
Related Information