Centralized Counters Client Configuration
The following sections describe each client, how it is triggered, and the format of the index that the client passes to its bound counter block. The clients are defined by CPSS_DXCH_CNC_CLIENT_ENT and can be enabled or disabled per client by calling cpssDxChCncCountingEnableSet, or per port by calling cpssDxChCncPortClientEnableSet.
To enable the CNC per port, cpssDxChCncPortClientEnableSet must be called once for each client to be enabled on that port.
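Enabling several clients on one port therefore takes one cpssDxChCncPortClientEnableSet call per client. The loop below is a minimal sketch of this pattern; the CPSS types and the API function are replaced by simplified stand-in stubs (an assumption, not the real CPSS headers) so the sketch compiles standalone.

```c
/* Stand-ins for the CPSS types and API so the sketch compiles
 * standalone; a real application includes the CPSS headers and
 * drops these stubs. */
typedef int GT_STATUS;
typedef unsigned char GT_U8;
typedef int GT_BOOL;
#define GT_OK 0
#define GT_TRUE 1

typedef enum {
    CPSS_DXCH_CNC_CLIENT_L2L3_INGRESS_VLAN_E,
    CPSS_DXCH_CNC_CLIENT_INGRESS_VLAN_PASS_DROP_E,
    CPSS_DXCH_CNC_CLIENT_EGRESS_VLAN_PASS_DROP_E
} CPSS_DXCH_CNC_CLIENT_ENT;

static int callCount = 0; /* counts stubbed API invocations */

static GT_STATUS cpssDxChCncPortClientEnableSet(GT_U8 devNum, GT_U8 portNum,
    CPSS_DXCH_CNC_CLIENT_ENT client, GT_BOOL enable)
{
    (void)devNum; (void)portNum; (void)client; (void)enable;
    callCount++;
    return GT_OK;
}

/* Enable every client in the list on one port: one call per client. */
GT_STATUS enablePortClients(GT_U8 devNum, GT_U8 portNum,
    const CPSS_DXCH_CNC_CLIENT_ENT *clients, int numClients)
{
    GT_STATUS rc;
    int i;
    for (i = 0; i < numClients; i++)
    {
        rc = cpssDxChCncPortClientEnableSet(devNum, portNum, clients[i], GT_TRUE);
        if (rc != GT_OK)
            return rc;
    }
    return GT_OK;
}
```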
L2/L3 Ingress VLAN Client
The L2/L3 Ingress VLAN client is defined as CPSS_DXCH_CNC_CLIENT_L2L3_INGRESS_VLAN_E and is used to count traffic on a per-VLAN basis.
The VLAN ID used is the VLAN assigned after the Ingress Policy engine. This is the same VID used by the Bridge and Router engines.
Triggering
CNC is enabled for this client by calling cpssDxChCncCountingEnableSet with the CPSS_DXCH_CNC_COUNTING_ENABLE_UNIT_PCL_E parameter.
If the Ingress port is configured to enable the L2/L3 Ingress VLAN client, and one or more counter blocks are bound to the L2/L3 Ingress VLAN client, a counter update is triggered for all traffic received on the port.
Index Information
In eArch devices, the client is an eVLAN. Counter indexing can be done in one of VLAN-based modes defined by CPSS_DXCH_CNC_VLAN_INDEX_MODE_ENT and set by cpssDxChCncVlanClientIndexModeSet. Each mode has a different index calculation as shown in the tables in the Appendix CNC Indexing Format.
Ingress Policy Clients
The Ingress Policy engine supports the following lookups, where each lookup can serve as an independent client to the CNC unit. The Ingress Policy clients are:
IPCL0_0 – Defined by CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_E
IPCL0_1 – Defined by CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_1_E
eArch devices introduce a new mechanism for TCAM lookups, in which each of lookups 0, 1, and 2 has four parallel sub-lookups. The difference between a lookup and a sub-lookup is that each lookup can use a different key, whereas the parallel sub-lookups of a given lookup all share the same key.
Each sub-lookup can serve as a client and be bound to a CNC block.
The clients are listed as follows:
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_PARALLEL_0_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_PARALLEL_1_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_PARALLEL_2_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_0_PARALLEL_3_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_1_PARALLEL_0_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_1_PARALLEL_1_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_1_PARALLEL_2_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_1_PARALLEL_3_E
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_2_PARALLEL_0_E - N/A for Falcon
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_2_PARALLEL_1_E - N/A for Falcon
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_2_PARALLEL_2_E - N/A for Falcon
CPSS_DXCH_CNC_CLIENT_INGRESS_PCL_LOOKUP_2_PARALLEL_3_E - N/A for Falcon
Triggering
The PCL unit is enabled for CNC by calling cpssDxChCncCountingEnableSet with the CPSS_DXCH_CNC_COUNTING_ENABLE_UNIT_PCL_E parameter.
For each PCL rule, the associated action can enable the use of a CNC block and specify the index of the counter. This is defined by CPSS_DXCH_PCL_ACTION_MATCH_COUNTER_STC, which is part of CPSS_DXCH_PCL_ACTION_STC, set by cpssDxChPclRuleSet.
Index Information
The Ingress Policy Clients pass the matchCounterIndex to the counter block. The application must ensure that the index is in the range of the block. The counter is incremented every time the rule is matched. For a detailed description of the indexing, see tables in CNC Indexing Format.
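The application-side arithmetic can be sketched as follows: a CNC block holds 1024 counters in eArch devices and 2048 in earlier devices, and the client's index space is divided into ranges of that size, so a given matchCounterIndex determines both which range must be bound to a block and the offset within that block. The helper below is illustrative only; its names are not CPSS APIs.

```c
typedef struct {
    unsigned int rangeBit;      /* bit to enable in the block's range bitmap */
    unsigned int offsetInBlock; /* counter offset inside the bound block     */
} CNC_INDEX_LOCATION;

/* Map a client counter index (e.g. a PCL matchCounterIndex) to the
 * index range it belongs to and the offset within a block bound to
 * that range. blockSize is 1024 for eArch devices, 2048 otherwise. */
CNC_INDEX_LOCATION cncLocateIndex(unsigned int clientIndex, unsigned int blockSize)
{
    CNC_INDEX_LOCATION loc;
    loc.rangeBit = clientIndex / blockSize;
    loc.offsetInBlock = clientIndex % blockSize;
    return loc;
}
```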
Ingress VLAN Pass/Drop Client
The Ingress VLAN client, defined by CPSS_DXCH_CNC_CLIENT_INGRESS_VLAN_PASS_DROP_E, is used to count the allowed Ingress traffic and dropped Ingress traffic on a per-VLAN basis. The VLAN used is the VLAN assigned after the TTI lookup. In eArch devices, the counters are used per eVLAN.
Triggering
Enable the client per port to count all packets with the command FORWARD, MIRROR, SOFT_DROP, or HARD_DROP. Additionally, FROM_CPU traffic can be counted by calling cpssDxChCncIngressVlanPassDropFromCpuCountEnableSet. In eArch devices, the counting is enabled per physical port.
Index Information
The VLAN Ingress counter index that is passed to the bound counter block(s) contains the following information:
VLAN-ID assigned to the packet after the TTI lookup
Drop/Pass command:
0 – Packet passes through the Ingress pipeline (at the Pre-Egress engine, the packet command is FORWARD, MIRROR, or, if enabled, FROM_CPU)
1 – Packet is dropped by the Ingress pipeline (at the Pre-Egress engine, the packet command is HARD_DROP or SOFT_DROP)
For a detailed description of the indexing, see tables in CNC Indexing Format.
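As an illustration of this index, the sketch below unpacks a counter index into its VID and pass/drop parts. The bit layout here is an assumption for illustration only (a 13-bit VID in the low bits with the pass/drop bit above it); the authoritative per-device layout is given in the CNC Indexing Format appendix.

```c
typedef struct {
    unsigned int vid;  /* VLAN-ID assigned after the TTI lookup */
    unsigned int drop; /* 0 - packet passed, 1 - packet dropped */
} VLAN_PASS_DROP_INDEX;

/* Illustrative decode only: assumes a 13-bit VID in the low bits with
 * the pass/drop bit above it. Consult the CNC Indexing Format appendix
 * for the layout of the actual device. */
VLAN_PASS_DROP_INDEX decodeVlanPassDropIndex(unsigned int counterIndex)
{
    VLAN_PASS_DROP_INDEX d;
    d.vid  = counterIndex & 0x1fff;
    d.drop = (counterIndex >> 13) & 1;
    return d;
}
```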
Egress VLAN Pass/Drop Client
The Egress VLAN client, defined by CPSS_DXCH_CNC_CLIENT_EGRESS_VLAN_PASS_DROP_E, is used for counting the Egress queued traffic and Egress drop traffic separately, on a per VLAN basis.
In eArch devices, the counters are used per eVLAN.
On Falcon devices, CPSS counts tail-drops only. On other devices, there are 3 ways to define how dropped traffic is counted:
A drop-counter counts egress-filtered and tail-dropped traffic.
A drop-counter counts egress-filtered traffic only.
A drop-counter counts tail-drop only.
These options are defined by CPSS_DXCH_CNC_EGRESS_DROP_COUNT_MODE_ENT. Egress-filtered traffic is traffic filtered due to VIDX filtering, Egress VLAN filtering, Egress spanning tree filtering, Egress source-ID filtering, Source port/trunk filtering, Trunk Multicast filtering, and so on.
Tail-dropped traffic is traffic filtered due to tail drop thresholds (see Transmit Queues Manager).
Triggering
The Egress VLAN client is triggered by the Egress pipeline during the queuing stage. There is no need to trigger the counting globally in the CNC; it is enabled per port only.
To set the Egress VLAN drop counting mode, call cpssDxChCncEgressVlanDropCountModeSet.
Index Information
The Egress VLAN Counter index that is passed to the counter block(s) contains the following information:
Packet's Egress VID
If the packet was routed, this is the VLAN assigned by the router’s next hop entry or Multicast Linked List (MLL) entry. If the packet was bridged, this is the VLAN used by the Bridge engine.
The Egress VID may still be subsequently modified by the Egress Policy engine and Egress VLAN Translation mechanism.
Drop/Pass command:
0 – Packet is passed
1 – Packet is dropped
For a detailed description of the indexing, see tables in CNC Indexing Format.
Egress Queue Client
The Egress Queue client, defined by CPSS_DXCH_CNC_CLIENT_EGRESS_QUEUE_PASS_DROP_E, is used for separately counting the Egress-queued traffic and Egress-dropped traffic on a per port/traffic-class/drop precedence basis. In Lion2 devices and in eArch devices,
the Egress Queue client is also used for counting QCN queued messages and QCN dropped messages.
These modes are defined by CPSS_DXCH_CNC_EGRESS_QUEUE_CLIENT_MODE_ENT:
Tail-drop Counting Mode – Implements Egress packet pass/drop counters on a per port/traffic-class/drop-precedence basis. For Gen6 devices, there is an optional reduced Tail-Drop counting mode, CPSS_DXCH_CNC_EGRESS_QUEUE_CLIENT_MODE_TAIL_DROP_REDUCED_E, which is unaware of drop precedence. Note that the indexing format is different in this mode.
CN Mode (for Lion2 and Gen5/Gen6 devices) – Implements CN message pass/drop counters and non-CN message pass/drop counters.
The mode is set by calling cpssDxChCncEgressQueueClientModeSet.
Triggering Client in Tail-Drop Counting Mode
The Egress Queue client is triggered by the Egress pipeline during the queuing stage. If a counter block is bound to the Egress Queue client, a counter update is triggered for queued and tail-dropped traffic. There is no need to explicitly trigger the client.
Index Information
The Egress Queue client index that is passed to the counter block(s) contains the DP, TC, Port number and Pass/Drop command.
For a detailed description of the indexing, see tables in CNC Indexing Format.
In Lion2, a counter associated with an Egress port and queue is not updated only in the related Egress port group. To maintain wire speed, an arbiter selects a port group in the relevant hemisphere and increments the counter at the appropriate index there; for example, 4 packets of the same flow may be counted in 4 different port groups. Therefore, the user must:
Ensure that the TxQ client has an associated CNC block in all port groups.
Obtain the correct packet count by summing the relevant counters over the 4 port groups. There are two options: either loop through the relevant port groups, read the counter in each by calling cpssDxChCncPortGroupCounterGet, and sum the values; or call cpssDxChCncPortGroupCounterGet once with portGroupsBmp = 0x0F if the port belongs to port groups 0-3, or 0xF0 if it belongs to port groups 4-7.
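The first of the two options can be sketched as follows. The CPSS call is replaced by a stub (an assumption, so the sketch compiles standalone; a real application would call cpssDxChCncPortGroupCounterGet with a single-port-group bitmap instead), and the counters are modeled as plain 64-bit integers rather than the two-word CPSS_DXCH_CNC_COUNTER_STC format.

```c
#include <stdint.h>

/* Stand-ins for the CPSS status type and counter structure, so the
 * sketch compiles standalone. */
typedef int GT_STATUS;
#define GT_OK 0
typedef struct { uint64_t packetCount; uint64_t byteCount; } CNC_COUNTER;

/* Stub: pretend each port group counted 4 packets of 64 bytes. A real
 * application calls cpssDxChCncPortGroupCounterGet here. */
static GT_STATUS counterGetStub(unsigned int portGroupBmp, unsigned int blockNum,
                                unsigned int index, CNC_COUNTER *c)
{
    (void)portGroupBmp; (void)blockNum; (void)index;
    c->packetCount = 4;
    c->byteCount = 256;
    return GT_OK;
}

/* Sum one counter index over port groups 0..3, as a Lion2 Egress Queue
 * counter for a port in port groups 0-3 must be read. */
GT_STATUS sumOverPortGroups(unsigned int blockNum, unsigned int index,
                            CNC_COUNTER *total)
{
    unsigned int pg;
    CNC_COUNTER c;
    GT_STATUS rc;
    total->packetCount = 0;
    total->byteCount = 0;
    for (pg = 0; pg < 4; pg++)
    {
        rc = counterGetStub(1u << pg, blockNum, index, &c);
        if (rc != GT_OK)
            return rc;
        total->packetCount += c.packetCount;
        total->byteCount += c.byteCount;
    }
    return GT_OK;
}
```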
Code Example
The following two functions show an example of setting and reading the Egress Queue counters:
/* samplePort is the port used in SetTCCounters; it identifies which
   index range is mapped into the block, since each range covers a set
   of ports.
   Usage: GetTCCounters 0,1 */
GT_STATUS GetTCCounters(GT_U32 startBlock, GT_U32 samplePort)
{
    CPSS_DXCH_CNC_COUNTER_STC tcCount;
    GT_STATUS rc;
    GT_U16 i, offset, blockSize;
    GT_U16 dp, tc, port, dropped;

    if(PRV_CPSS_DXCH_PP_HW_INFO_E_ARCH_SUPPORTED_MAC(0) == GT_TRUE)
        blockSize = 1024;
    else
        blockSize = 2048;

    for(i = 0; i < blockSize; i++)
    {
        /* pass counters in block startBlock */
        rc = cpssDxChCncCounterGet(0, startBlock, i, CPSS_DXCH_CNC_COUNTER_FORMAT_MODE_0_E, &tcCount);
        if(GT_OK != rc)
            return rc;
        if(tcCount.packetCount.l[0] != 0)
        {
            if(PRV_CPSS_DXCH_PP_HW_INFO_E_ARCH_SUPPORTED_MAC(0) == GT_TRUE)
            {
                offset = samplePort / 32; /* each range covers 32 ports */
                port = ((i >> 5) & 0x001f) + offset * 32;
                tc = (i >> 2) & 0x0007;
                dp = i & 0x0003;
                osPrintf("counter %d: port %d queue %d dp level %d ", i, port, tc, dp);
                osPrintf("%d packets PASSED %u bytes\n", tcCount.packetCount.l[0],
                         tcCount.byteCount.l[0]);
            }
            else /* for AC3 */
            {
                offset = samplePort / 64; /* assuming each range covers 64 ports */
                dp = i & 0x0003;
                tc = (i >> 2) & 0x0007;
                port = ((i >> 5) & 0x001f) + offset * 64;
                dropped = (i >> 10) & 0x0001;
                osPrintf("counter %d: on port %d on queue %d dp level %d\n ", i, port, tc, dp);
                if(dropped)
                    osPrintf("%d packets DROPPED %u bytes\n", tcCount.packetCount.l[0],
                             tcCount.byteCount.l[0]);
                else
                    osPrintf("%d packets PASSED %u bytes\n", tcCount.packetCount.l[0],
                             tcCount.byteCount.l[0]);
            }
        }
        /* drop counters in block startBlock+1 */
        rc = cpssDxChCncCounterGet(0, startBlock + 1, i, CPSS_DXCH_CNC_COUNTER_FORMAT_MODE_0_E, &tcCount);
        if(GT_OK != rc)
            return rc;
        if(tcCount.packetCount.l[0] != 0)
        {
            if(PRV_CPSS_DXCH_PP_HW_INFO_E_ARCH_SUPPORTED_MAC(0) == GT_TRUE)
            {
                offset = samplePort / 32;
                port = ((i >> 5) & 0x001f) + offset * 32;
                tc = (i >> 2) & 0x0007;
                dp = i & 0x0003;
                osPrintf("counter %d: port %d queue %d dp level %d ", i, port, tc, dp);
                osPrintf("%d packets DROPPED %u bytes\n", tcCount.packetCount.l[0],
                         tcCount.byteCount.l[0]);
            }
        }
    }
    return rc;
}
/* Usage: SetTCCounters 0,1 */
GT_STATUS SetTCCounters(GT_U32 startBlock, GT_U32 port)
{
    GT_STATUS rc = GT_OK;
    GT_U16 r;
    GT_U64 indexRangesBmp;

    rc = cpssDxChCncBlockClientEnableSet(0, startBlock, CPSS_DXCH_CNC_CLIENT_EGRESS_QUEUE_PASS_DROP_E, GT_TRUE);
    if(GT_OK != rc)
        return rc;
    rc = cpssDxChCncBlockClientEnableSet(0, startBlock + 1, CPSS_DXCH_CNC_CLIENT_EGRESS_QUEUE_PASS_DROP_E, GT_TRUE);
    if(GT_OK != rc)
        return rc;

    /* in BC2 every range covers 32 ports, so 256 ports or 8 ranges are supported */
    r = port / 32;
    indexRangesBmp.l[0] = 1 << r;
    indexRangesBmp.l[1] = 0;
    osPrintf("pass range is %d\n", indexRangesBmp.l[0]);
    rc = cpssDxChCncBlockClientRangesSet(0, startBlock,
                                         CPSS_DXCH_CNC_CLIENT_EGRESS_QUEUE_PASS_DROP_E,
                                         indexRangesBmp);
    if(GT_OK != rc)
        return rc;

    /* drop ranges start after the 8 pass ranges */
    indexRangesBmp.l[0] = 1 << (r + 8);
    indexRangesBmp.l[1] = 0;
    osPrintf("drop range is %d\n", indexRangesBmp.l[0]);
    rc = cpssDxChCncBlockClientRangesSet(0, startBlock + 1,
                                         CPSS_DXCH_CNC_CLIENT_EGRESS_QUEUE_PASS_DROP_E,
                                         indexRangesBmp);
    return rc;
}
Egress Policy Client
The Egress Policy client, defined by CPSS_DXCH_CNC_CLIENT_EGRESS_PCL_E, is used for counting traffic on a per Egress rule basis.
In eArch devices, the EPCL has parallel quad sub-lookups defined as:
CPSS_DXCH_CNC_CLIENT_EGRESS_PCL_PARALLEL_0_E
CPSS_DXCH_CNC_CLIENT_EGRESS_PCL_PARALLEL_1_E
CPSS_DXCH_CNC_CLIENT_EGRESS_PCL_PARALLEL_2_E
CPSS_DXCH_CNC_CLIENT_EGRESS_PCL_PARALLEL_3_E
Each sub-lookup can be set as a client.
Triggering
The Egress PCL client is enabled by calling cpssDxChCncCountingEnableSet with the CPSS_DXCH_CNC_COUNTING_ENABLE_UNIT_PCL_E parameter.
A counter update is triggered if the Egress Policy TCAM lookup results in a match. See the matchCounter member of the CPSS_DXCH_PCL_ACTION_STC data type for the counter enable flag and the counter block index.
Index Information
The Egress Policy client index that is passed to the counter block(s) contains the <CounterIndex> field extracted from the Egress Policy Action Table. This field is a pointer to one of the 32 policy rule match counters. The counter is incremented for every packet matching this rule.
For a detailed description of the indexing, see tables in CNC Indexing Format.
ARP Table Client
This section is relevant for:
xCat3 / AlleyCat5 devices
Lion2 devices
Gen5 devices and above
The ARP Table client, defined by CPSS_DXCH_CNC_CLIENT_ARP_TABLE_ACCESS_E, is used for counting routed traffic on a per next-hop MAC address basis.
Triggering
The ARP Table client is globally enabled if one or more blocks of CNC are bound to it.
Index Information
The ARP Table client index for the centralized counter block(s) is the index used to access the ARP Table (the ARP Pointer), which is assigned by the Router next hop entry.
This section is relevant for Gen5 devices and above
In the above devices, the ARP pointer is also used as the NAT Table Client index to the centralized counter block(s). Since the ARP pointer and NAT pointer are multiplexed on the same client, an offset is added to point to the NAT index so that the NAT index equals to the ARP/NAT pointer plus the offset. The offset is a global configuration and is configured using cpssDxChCncOffsetForNatClientSet.
For a detailed description of the indexing, see tables in CNC Indexing Format.
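Since the ARP and NAT pointers share one client, the resulting client indexes differ only by the configured offset. The helpers below are a trivial illustration of that arithmetic; the names are not CPSS APIs, and the offset value in the test is arbitrary.

```c
/* The ARP and NAT pointers are multiplexed on one CNC client. A global
 * offset, configured via cpssDxChCncOffsetForNatClientSet, shifts NAT
 * entries into their own part of the client index space. */
unsigned int arpClientIndex(unsigned int arpPointer)
{
    return arpPointer;             /* ARP entries are indexed directly */
}

unsigned int natClientIndex(unsigned int natPointer, unsigned int natOffset)
{
    return natPointer + natOffset; /* NAT index = pointer + global offset */
}
```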
Tunnel-Start Client
This section is relevant for:
xCat3 / AlleyCat5 devices
Lion2 devices
Gen5 devices and above
The Tunnel-Start client, defined by CPSS_DXCH_CNC_CLIENT_TUNNEL_START_E, can be used for counting traffic that egressed on a tunnel-start interface.
Triggering
The Tunnel-Start client is globally enabled if one or more blocks of CNC are bound to it.
Index Information
The index used to access the Tunnel-start Table (aka the Tunnel Pointer) is used as the Tunnel-start Table client index for the centralized counter block(s).
For a detailed description of the indexing, see tables in CNC Indexing Format.
Tunnel Termination Interface (TTI) Client
This section is relevant for:
xCat3 / AlleyCat5 devices
Lion2 devices
Gen5 devices and above
The Tunnel Termination Interface engine, defined by CPSS_DXCH_CNC_CLIENT_TTI_E, supports one lookup.
Gen5 devices and above introduce a mechanism enabling 2 parallel lookups, defined by CPSS_DXCH_CNC_CLIENT_TTI_PARALLEL_0_E and CPSS_DXCH_CNC_CLIENT_TTI_PARALLEL_1_E, each of which can be bound to a different block of counters.
Bobcat3 and higher devices implement two additional CNC clients, CPSS_DXCH_CNC_CLIENT_TTI_PARALLEL_2_E and CPSS_DXCH_CNC_CLIENT_TTI_PARALLEL_3_E, used in TTI TCAM parallel lookups 2 and 3, respectively.
Triggering
CNC is enabled for this client by calling cpssDxChCncCountingEnableSet with the CPSS_DXCH_CNC_COUNTING_ENABLE_TTI_UNIT_E parameter. Every hit on the TTI rule will increment the relevant counter.
Index Information
The TTI Client specifies the counter index in the TTI Action entry.
For a detailed description of the indexing, see tables in CNC Indexing Format.
Ingress Source ePort Client
This section is relevant for Gen5 devices and above
The Ingress source ePort client, defined by CPSS_DXCH_CNC_CLIENT_INGRESS_SRC_EPORT_E, is used to count packets and bytes per Ingress source ePort. The ePort used is the one assigned after all IPCL lookups.
Triggering
The Ingress source ePort client is globally enabled if one or more blocks of CNC are bound to it.
Index Information
The Ingress source ePort Client is indexed by the ePort. For a detailed description of the indexing, see tables in CNC Indexing Format.
Egress Target ePort Client
This section is relevant for Gen5 devices and above
The Egress Target ePort client, defined by CPSS_DXCH_CNC_CLIENT_EGRESS_TRG_EPORT_E, is used to count packets and bytes per Egress target ePort. The target ePort is the destination ePort assigned by one of the forwarding engines. Note that it must be an individual local ePort. If forwarding is destined to a Multicast group, the ports composing the group will not be included in the counting.
Triggering
The Egress target ePort client is globally enabled if one or more blocks of CNC are bound to it.
Index Information
The Egress Target ePort Client is indexed by the ePort assigned by the tunnel start entry. For a detailed description of the indexing, see tables in CNC Indexing Format.
Packet Type Pass/Drop Client
This section is relevant for Gen5 devices and above
The Packet Type Pass/Drop client, defined by CPSS_DXCH_CNC_CLIENT_PACKET_TYPE_PASS_DROP_E, is used to count packets and bytes in the Pre-egress unit, after all replications. For packets with the TO_CPU command, there are 2 sub-classifications of the packet type, defined by CPSS_DXCH_CNC_PACKET_TYPE_PASS_DROP_TO_CPU_MODE_ENT: one is based on the physical port, and the other on the CPU code. Set one of these modes by calling cpssDxChCncPacketTypePassDropToCpuModeSet.
Triggering
The Packet Type Pass/Drop client is globally enabled if one or more blocks of CNC are bound to it.
Index Information
The Packet Type Pass/Drop Client has 2 different indexing modes. For a detailed description of the indexing, see tables in CNC Indexing Format.
Traffic Manager (TM) Pass/Drop Client
This section is relevant for Gen5 devices and above
The TM Pass/Drop client, defined by CPSS_DXCH_CNC_CLIENT_TM_PASS_DROP_E, is used to count packets processed by the TM. Note that the TM itself does not drop packets. The PP queries the TM before sending a packet to it. Upon the TM response, the counters are incremented.
For more information on the TM architecture, see Traffic Manager (TM).
An example of using the TM CNC counters can be found in the cpssEnabler directory; look for configTMQCounters() and getTMQCounters().
Triggering
The TM client is globally enabled if one or more blocks of the CNC are bound to it.
Index Information
The TM Pass/Drop client has 4 different indexing modes defined by CPSS_DXCH_CNC_TM_INDEX_MODE_ENT. To set the required mode, call cpssDxChCncTmClientIndexModeSet. For details on the indexing format, see tables in CNC Indexing Format.