...
Cat9k switches inject CPU-generated traffic into the priority queue, and this has been the behavior since day 1. This behavior is documented in the "To-CPU and From-CPU packets" section of the QoS white paper:
https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9000/white-paper-c11-742388.html

Packets generated by the CPU are sent directly to the egress queues. When you define a queuing policy on a port, control packets are mapped to a queue in the following order:
1. The highest-level priority queue is always chosen first.
2. In the absence of a priority queue, queue 0 is selected.

If the FNF exporter is configured to export the application table and the application attributes templates frequently, this extra traffic entering the priority queue could overwhelm it, and output drops could be seen as a result.

flow exporter <name>
 destination <collector-ip>
 transport udp 6007
 export-protocol ipfix
 option interface-table timeout 10
 option vrf-table timeout 10
 option sampler-table
 option application-table timeout 10         <====
 option application-attributes timeout 10    <====
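If this behavior is suspected, the following commands are a rough sketch of how to check for output drops and per-queue statistics on the egress interface towards the collector. The interface name is only an example, and on some Catalyst 9000 models the keyword is "active" rather than "switch active":

! Check for total output drops on the egress interface towards the collector
show interfaces TenGigabitEthernet1/0/1 | include output drops

! Per-queue enqueue and drop counters for the same interface
! (syntax varies slightly between 9300 and 9400/9500/9600)
show platform hardware fed switch active qos queue stats interface TenGigabitEthernet1/0/1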
This issue can be seen when all of the following conditions are met:
- Catalyst 9300/9400/9500/9600 series switches
- FNF exporter configured to export large templates [such as the application table and the application attributes]
- the egress interface towards the NetFlow collector is a low-bandwidth interface [100 Mbps or lower]
- the egress interface towards the NetFlow collector has a QoS policy applied where the priority queue has a very small queue-buffers ratio configured and, as a result, very little buffer space available [see the example sketched after this list]

The small amount of buffer space, combined with the low-bandwidth interface, means that the priority queue is not able to handle the burst of traffic that occurs when the FNF exporter needs to export those large templates.
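As an illustration only, a queuing policy matching the last condition could look like the sketch below. The policy and class names are hypothetical, and the queue-buffers ratio value is just an example of a very small allocation:

policy-map EGRESS-QUEUING                ! hypothetical policy name
 class PRIORITY-QUEUE                    ! hypothetical class carrying priority traffic
  priority level 1
  queue-buffers ratio 5                  ! very small share of the port buffers
 class class-default
  bandwidth remaining percent 100
!
interface GigabitEthernet1/0/48          ! low-bandwidth egress interface towards the collector
 service-policy output EGRESS-QUEUING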
Possible workarounds:
- Increase the interface bandwidth, if possible [the problem is seen when the FNF collector is connected via a 100M interface, but not when connected via a 1G or faster interface].
- Modify the QoS policy, if possible, so that the priority queue can utilize more buffer space from the common pool [see the sketch after this list].
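As a sketch of the second workaround, assuming the hypothetical policy shown earlier, the queue-buffers ratio of the priority class can be raised so that the priority queue receives a larger share of the port buffers; the exact value depends on how much buffer the remaining queues need:

policy-map EGRESS-QUEUING
 class PRIORITY-QUEUE
  priority level 1
  queue-buffers ratio 20                 ! larger buffer share for the priority queue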
This issue is being addressed by moving the FNF traffic injected by the CPU away from the priority queue and into the non-priority queue with the highest bandwidth available. This is combined with the use of the qos queue-softmax-multiplier command to increase the buffer space available to this non-priority queue, so that it can absorb the burst of data coming from the FNF exporter.
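For reference, qos queue-softmax-multiplier is a global configuration command that raises the soft maximum buffer threshold of non-priority queues, allowing them to draw more buffers from the shared pool. The value 1200 below is only an example (the maximum); the appropriate value depends on overall buffer usage on the switch:

configure terminal
 qos queue-softmax-multiplier 1200
 end

! Verify the resulting per-queue buffer allocation
! (interface name is an example; keyword may be "active" on some models)
show platform hardware fed switch active qos queue config interface TenGigabitEthernet1/0/1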