...
High latency observed in customer network.
Under certain conditions, particularly forced test conditions, it is possible to create scenarios where flow lock contention becomes very high because of NAT gatekeeper failures. This happens when a large amount of traffic that does not need to be NAT'd is sent through an interface that has NAT configured. Most traffic hitting a NAT interface should be sent through NAT; if it is not, issues can appear at around 1 Gb of such traffic.
ASR1000(config)#ip nat service gatekeeper

After this, we can manually configure the size of the cache. The recommended starting point is 64K for now. Here is the command to configure the cache to be 64K:

ASR1000(config)#ip nat settings gatekeeper-size 64000

From here we can monitor the latency using ping as before. We can monitor the actual entries in the cache using the following commands:

show platform hardware qfp active feature nat datapath gatein activity
show platform hardware qfp active feature nat datapath gateout activity

If needed, we can clear the statistics by adding "clear" to the end of the two commands above to get a better baseline. Depending on what we see with latency, we can fine-tune the cache size later to try to reach optimum performance. Changing the cache size should not be service impacting; however, we recommend doing it during a maintenance window to be safe.
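If you want to apply the cache-size change and collect the gatein/gateout activity counters from several devices at once, a small script can wrap the same CLI commands. The following is a minimal sketch only, assuming SSH access via the Netmiko library; the hostname, credentials, and the 64000 starting size are placeholder assumptions, and the commands themselves are the ones listed above.

# Hypothetical automation sketch (not part of the original procedure).
# Assumes Netmiko is installed and the device details below are replaced
# with real values.
from netmiko import ConnectHandler

DEVICE = {
    "device_type": "cisco_xe",      # ASR1000 runs IOS XE
    "host": "asr1000.example.com",  # placeholder hostname
    "username": "admin",            # placeholder credentials
    "password": "secret",
}

GATE_COMMANDS = [
    "show platform hardware qfp active feature nat datapath gatein activity",
    "show platform hardware qfp active feature nat datapath gateout activity",
]

def set_gatekeeper_cache(size: int = 64000) -> None:
    # Push the recommended 64K starting point from the article.
    with ConnectHandler(**DEVICE) as conn:
        conn.send_config_set([f"ip nat settings gatekeeper-size {size}"])

def collect_gatekeeper_activity() -> dict:
    # Run the two activity commands and return their raw output for review.
    with ConnectHandler(**DEVICE) as conn:
        return {cmd: conn.send_command(cmd) for cmd in GATE_COMMANDS}

if __name__ == "__main__":
    set_gatekeeper_cache()
    for cmd, output in collect_gatekeeper_activity().items():
        print(f"### {cmd}\n{output}\n")

Polling the activity output periodically (and clearing the counters between samples, as noted above) gives a rough picture of how full the cache is running before deciding whether to adjust the size.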
The fix was backed out and later re-added under bug CSCun06260.