On any HPE Synergy Compute Module running VMware ESXi 7.0.3 or VMware ESXi 8.0 (or later) and configured with any of the HPE Synergy network adapters listed in the Scope section below, low throughput is observed in an NSX-T / GENEVE overlay network configuration when both Default Queue Receive Side Scaling (DRSS) and Large Receive Offload (LRO) are enabled.

In an overlay network with a GENEVE configuration, low throughput and spikes in CPU usage may be observed for Virtual Machine workloads because Large Receive Offload (LRO) aggregation does not occur in the hardware. LRO aggregation does not occur in this configuration because the GENEVE packets carry GENEVE Options in the packet header; VMware NSX-T was changed such that GENEVE Options are now present in the GENEVE header.

Due to a limitation in the network adapter firmware, the qedentv driver does not support Transparent Packet Aggregation (TPA) / LRO for packets with GENEVE Options present in the header. TPA / LRO is supported for GENEVE packets without Options in the header.

Important: This condition does not represent a defect; the system is functioning as expected. It is a limitation of the firmware and driver of the network adapter. The absence of TPA / LRO occurs due to the nature of the system configuration, for example, GENEVE packets with Options.
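Before applying any workaround, it can help to confirm which qedentv module parameters are currently applied on the host. A minimal sketch, assuming an SSH session to the ESXi host:

```shell
# List all parameters the qedentv module supports, with their current values
esxcli system module parameters list -m qedentv

# Show only the options string currently set on the qedentv module
esxcfg-module -g qedentv
```

If the options string already shows DRSS/RSS values, the steps in the Resolution section below have likely been applied previously.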
Any HPE Synergy Compute Module running VMware ESXi 7.0.3 or VMware ESXi 8.0 (or later) and configured with any of the following HPE Synergy network adapters:

- HPE Synergy 4820C 10/20/25Gb Converged Network Adapter (876449-B21)
- HPE Synergy 6810C 25/50Gb Ethernet Adapter (867322-B21)
- HPE Synergy 6820C 25/50Gb Converged Network Adapter (P02054-B21)
Since Large Receive Offload (LRO) is not supported for configurations whose packets carry GENEVE Options, Receive Side Scaling (RSS) and Default Queue Receive Side Scaling (DRSS), both of which are supported by the qedentv network adapter driver, can provide significant performance improvement.

Perform the following steps to improve performance:

1. Enable RSS and DRSS for Virtual Machine workloads.

Pass the following driver parameters to the qedentv driver using the esxcfg-module command. Example:

esxcfg-module -s "rss_on_defq=1 RSS=4,4,4,4 DRSS=4,4,4,4 num_queues=32,32,32,32" qedentv

The values 4,4,4,4 and 32,32,32,32 correspond to the number of network ports configured on the network adapter; the example above applies to four configured network ports. If the server has only two configured ports, the command would be:

esxcfg-module -s "rss_on_defq=1 RSS=4,4 DRSS=4,4 num_queues=32,32" qedentv

This information can also be found in the VMware Performance Best Practices document (page 47), available at the following URL: Performance Best Practices for VMware vSphere 7.0, Update 3

2. Change the VMX file (a Virtual Machine reboot is required).

Perform the following steps to add the entry to the VMX file: Select the VM -> Edit Settings -> VM Options -> Advanced -> Edit Configuration -> Add parameters -> (add the Key and Value) -> Save.

Note: The key is ethernetX.pNicFeatures, where X represents the number of the virtual network card to which the feature should be added; for example, Value "4".

3. Configure a Jumbo Maximum Transmission Unit (MTU) on the vNIC (optional).

If a 9K MTU is configured on the physical network adapter, configuring an 8800 MTU on the vNIC interface (Virtual Machine) will provide additional benefit.

Alternative option: Disabling the driver offload function with "geneve_filter_en=0" may greatly improve network throughput but will lead to an increase in CPU utilization on the VMware ESXi host.

Disclaimer: One or more of the links above will take you outside the HPE website.
HPE is not responsible for content outside of its domain.
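The resolution steps above can be sketched as a single sequence run on the ESXi host. This is a sketch, not a definitive procedure: the parameter values (four RSS engines per port, 32 queues, an 8800-byte guest MTU) follow the four-port example in this advisory and should be sized to the number of configured ports; a host reboot is required before module parameter changes take effect.

```shell
# 1. Enable DRSS and RSS on the qedentv driver (example values for a 4-port adapter)
esxcfg-module -s "rss_on_defq=1 RSS=4,4,4,4 DRSS=4,4,4,4 num_queues=32,32,32,32" qedentv

# Verify the options string that will be applied at the next host reboot
esxcfg-module -g qedentv

# 2. Per-VM: add ethernetX.pNicFeatures with Value "4" to the .vmx file
#    (Edit Settings -> VM Options -> Advanced -> Edit Configuration),
#    then power-cycle the VM.

# 3. Optional: with a 9K MTU on the physical adapter, set the guest vNIC MTU to 8800.

# Alternative (note: -s replaces the whole options string, so this would
# discard the RSS/DRSS settings above; it also raises host CPU utilization):
# esxcfg-module -s "geneve_filter_en=0" qedentv

# Reboot the host so the module parameter changes take effect
# reboot
```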
Operating Systems Affected: Not Applicable