...
A Cat3650/Cat3850/Cat9k switch is receiving a multicast stream on an ingress interface [let's call it interface A].
There is no QoS policy in the ingress direction on interface A.
This interface A is also recognised as an Mrouter port from the IGMP Snooping perspective.
A client directly connected behind this Cat3650/Cat3850/Cat9k switch is sending unicast traffic that egresses via interface A.
There is a QoS policy with a policer applied in the egress direction on interface A.
The intention is to police the unicast traffic stream originating from the client directly connected behind this switch, and the rate is set accordingly.
It is observed that both the unicast stream and the multicast stream [both carrying the same DSCP marking] are accounted by the configured policer, which negatively impacts the unicast stream.

A picture is worth a thousand words:

        multicast stream                      unicast traffic
          239.100.1.1              from 10.0.200.100 to 10.0.200.200
        +--------->                       <--------------+

                              +----------------------+
                              |                      |
             Gi1/0/5  Gi1/0/1 |      Cat9300 DUT     | Gi1/0/3  Gi1/0/5
    Multicast source +--------+     Layer2 switch    +--------+ Multicast receiver
    Vlan200:                  |                      |          Vlan200:
    10.0.200.200/24      ^    +----------------------+          10.0.200.100/24
                         |
                         + QoS policy attached here in OUTPUT direction
The multicast stream ingresses on an interface [which is recognised as an Mrouter port from the IGMP Snooping perspective] on which an egress QoS policy is configured with a policer action. The combined rate of the ingressing multicast stream and the egressing unicast stream is above the configured policer rate, so the drop action is imposed on the excess traffic of both the multicast and the unicast stream.
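To see why the drop action hits both streams, here is a minimal single-rate, two-color policer sketch in Python. This is only an illustration of a shared policer accounting every packet in its class, not a model of the actual UADP hardware policer; the 4 Mbps/3 Mbps loads, the 1500-byte frame size and the burst value are made-up example numbers.

```python
# Toy single-rate two-color policer. Illustrative only -- NOT the UADP
# hardware policer; load mix, frame size and burst are invented values.

class Policer:
    def __init__(self, cir_bps, bc_bytes):
        self.cir = cir_bps          # committed information rate, bits/s
        self.bc = bc_bytes          # committed burst, bytes
        self.bucket = bc_bytes      # current token count, bytes
        self.last = 0.0             # time of previous packet, seconds

    def offer(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at Bc.
        self.bucket = min(self.bc, self.bucket + (now - self.last) * self.cir / 8)
        self.last = now
        if self.bucket >= size_bytes:
            self.bucket -= size_bytes
            return "conform"        # conform-action transmit
        return "exceed"             # exceed-action drop

# CIR 5 Mbps as in POLICY_5M-out. Offer an interleaved aggregate of
# 4 Mbps unicast + 3 Mbps multicast (1500-byte frames, 7 Mbps total):
policer = Policer(cir_bps=5_000_000, bc_bytes=156_250)
drops = {"unicast": 0, "multicast": 0}
t = 0.0
for i in range(10_000):
    stream = "unicast" if i % 7 < 4 else "multicast"   # 4:3 packet ratio
    if policer.offer(1500, t) == "exceed":
        drops[stream] += 1
    t += 1500 * 8 / 7_000_000                          # 7 Mbps arrival rate

# Both streams lose packets, even though only the unicast stream was the
# intended target of the policer:
print(drops)
```

Because the policer only sees "a packet in class-default", it cannot distinguish the unicast stream from the replicated multicast copy, so the 2 Mbps of excess is taken from both.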
There are several ways to work around this behavior:

- Do not configure an egress QoS policy with a policer.
- Configure the policer rate high enough so that the unicast stream is not impacted.
- Configure different DSCP markings for the unicast stream and the multicast stream and modify the classes accordingly, so that the two streams do not fall into the same class.
- Match the multicast traffic in the INPUT policy and assign it to a QoS Group, then match on that QoS Group in the OUTPUT policy, so that the multicast traffic is queued to a separate queue.

Original OUTPUT policy:

!
policy-map POLICY_5M-out
 class class-default
  police cir 5000000 conform-action transmit exceed-action drop
!

New set of policies:

!
ip access-list extended MCAST_ACL
 permit ip any 239.100.1.0 0.0.0.255
!
class-map match-any MCAST_CM
 match access-group name MCAST_ACL
!
policy-map MCAST_PM_IN_WORKAROUND
 class MCAST_CM
  set qos-group 7
!
class-map match-any MCAST_CM_OUT
 match qos-group 7
!
policy-map POLICY_5M-out
 class MCAST_CM_OUT
!
interface
 service-policy input MCAST_PM_IN_WORKAROUND
 service-policy output POLICY_5M-out
!
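To illustrate why the qos-group workaround helps, here is a tiny Python sketch of the egress classification logic. It is an assumption-laden toy model, not switch code: once multicast carries qos-group 7, it falls into MCAST_CM_OUT, which has no police action, so only the remaining traffic is offered to the policer.

```python
# Toy model of the egress policy after the workaround: class lookup first,
# then the per-class action. Not switch code -- just the classification idea.

def classify(pkt):
    # class-map MCAST_CM_OUT matches qos-group 7 (set by the ingress policy
    # MCAST_PM_IN_WORKAROUND); everything else lands in class-default.
    return "MCAST_CM_OUT" if pkt.get("qos_group") == 7 else "class-default"

def egress_action(pkt, policer):
    if classify(pkt) == "class-default":
        return policer(pkt)    # police cir 5000000 ... still applies here
    return "transmit"          # MCAST_CM_OUT has no police action

# Even with a policer that marks everything as exceeding, the multicast
# packet (qos-group 7) is no longer dropped by it:
always_exceed = lambda pkt: "exceed"
print(egress_action({"qos_group": 7}, always_exceed))  # transmit
print(egress_action({"qos_group": 0}, always_exceed))  # exceed
```

The design point is simply that classification happens before the per-class action, so steering the multicast stream into its own class removes it from the policer's view entirely.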
The UADP ASIC has an Ingress pipeline [where ingress lookups are made] and an Egress pipeline [where egress lookups are made]. At each lookup stage we get a result of what to do with a packet, and we accumulate those results. The final fate of the packet is decided in Ingress Global Resolution [the IGR block for the Ingress pipeline] and in Egress Global Resolution [the EGR block for the Egress pipeline], where all the partial results are collected, combined and evaluated to derive the final, comprehensive decision.

When IGMP Snooping is enabled, it helps us constrain the delivery of a multicast stream to just the hosts that expressed their interest via an IGMP Join message. However, we need to realize where the multicast stream will be replicated to. The most obvious answer is that it will be replicated to those hosts that expressed their interest in the stream via an IGMP Join message. That is indeed true, however it is not the whole truth: IGMP Snooping constrains the delivery of the multicast stream to the hosts that sent an IGMP Join for the group AND to the Mrouter port(s).

In this scenario our INGRESS interface [where the multicast stream is being received] is recognised as an Mrouter port because the upstream router [the multicast source] is running PIM towards this interface. As a result, our multicast packet is replicated to two interfaces: Gi1/0/1 [expected, this is the Mrouter port] and Gi1/0/3 [expected, our multicast receiver is connected there].

Let's focus on the life of the packet that is replicated for going out of Gi1/0/1. In our case, one of the egress lookup stages in the Egress pipeline is the QoS lookup stage, since we have a QoS policy applied on Gi1/0/1 in the egress direction [and thus a QoS lookup is done in the Egress pipeline].
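The replication decision described above can be sketched as a simple set union. This is an illustrative Python model with the scenario's ports hard-coded, not ASIC or IOS XE code:

```python
# Sketch of the IGMP Snooping replication decision, with the scenario's
# ports hard-coded. Illustrative only -- not ASIC or IOS XE code.

snooping_members = {"239.100.1.1": {"Gi1/0/3"}}  # receiver sent an IGMP Join
mrouter_ports = {"Gi1/0/1"}                      # PIM runs towards Gi1/0/1

def replication_ports(group):
    # Delivery is constrained to the joined ports PLUS all Mrouter ports.
    # Note that the ingress port (Gi1/0/1) is NOT excluded at this stage;
    # the copy replicated back to it is only removed later, by the Dejavu
    # check during Egress Global Resolution.
    return snooping_members.get(group, set()) | mrouter_ports

print(sorted(replication_ports("239.100.1.1")))  # ['Gi1/0/1', 'Gi1/0/3']
```

The key point of the sketch is that the ingress Mrouter port legitimately appears in the replication set, which is why a copy of the stream enters the egress pipeline of the very interface it arrived on.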
From the QoS perspective [which has nothing to do with the actual forwarding perspective] this is legitimate traffic that is supposed to egress via Gi1/0/1, which has an egress QoS policy applied. The packet is therefore subject to QoS: it is matched based on its DSCP marking and queued/policed according to the configuration. This explains why it is accounted in the conformed/exceeded rate, simply because it is subject to the QoS lookup in the Egress pipeline.

Another lookup stage in the Egress pipeline is the one that decides how to forward such a packet. This packet is replicated to the interface on which it was received, and this is exactly what the Dejavu check protects us against, as we should never forward/flood a packet out of the interface on which it was received.

So at this point we have the result of the QoS lookup stage and of the forwarding lookup stage in the Egress pipeline. The packet goes through all the remaining lookup stages, and all the results are accumulated and delivered to the Egress Global Resolution [EGR] block, where the final decision is derived. That decision is to drop this packet, in other words not to forward it out of the Gi1/0/1 interface, because of the Dejavu check. All in all, this packet never leaves the Gi1/0/1 interface on which it was received, which is also why we do not see a high output rate on this interface in the show interface output.

However, before this final decision to drop the packet is reached, the packet goes through all the lookup stages in the Egress pipeline and, as a result, is subject to the QoS policy-map applied in the egress direction on Gi1/0/1, which is also why we see a high conformed/exceeded rate in the show policy-map interface output. This is expected behavior as per the UADP ASIC design architecture.