
The asynchronous version of the ESX/ESXi 4.x and ESXi 5.0/5.1 igb driver uses VMware's NetQueue technology to enable Intel Virtual Machine Device Queues (VMDq) support for Ethernet devices based on the Intel 82576 and 82580 Gigabit Ethernet Controllers. VMDq is optional and disabled by default.
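Before enabling VMDq, it can be useful to confirm which igb driver build is loaded and whether any module options are already set. A minimal check, assuming the Intel ports are already claimed by the igb driver and using vmnic2 as a placeholder interface name (ethtool -i reports the driver name and version on builds where ethtool is available, as in the verification examples later in this article):

# ethtool -i vmnic2
# esxcfg-module -g igb

The reported driver version can then be compared against the supported versions noted in the steps below before any options are changed.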
Enabling VMDq

To enable VMDq:

1. Ensure the correct version of the driver is installed and enabled to load automatically at boot:

   # esxcfg-module -e igb

   Notes: Currently this is supported on Intel's ESX/ESXi 4.x and ESXi 5.0/5.1 drivers (igb version 400.2.4.10 and higher) for 82576 and 82580 Gigabit Ethernet Controllers. This method does not work for ESXi 5.5.x; Intel discontinued VMDq in igb driver versions 4.2.16.8 and newer for ESXi 5.5. For more information, see Discontinuing NetQueue Support in the igb Driver Since ESXi 5.5 and Later (2090693).

2. Set the optional VMDq load parameters for the igb module on each port:

   - Set IntMode=2. A value of 2 selects MSI-X, which enables the Ethernet controller to direct interrupt messages to multiple processor cores. MSI-X must be enabled for NetQueue to work with VMDq.
   - Set VMDQ to the desired number of transmit and receive queues. The value ranges from 1 to 8, because Intel 82576 and 82580 based network devices provide a maximum of 8 transmit queues and 8 receive queues per port. The same value is applied to both the transmit and the receive queues.

   For a quad-port adapter, the following configuration turns on VMDq in full on all four ports:

   # esxcfg-module -s "IntMode=2,2,2,2 VMDQ=8,8,8,8" igb

   The VMDq configuration is flexible: on systems with multiple ports, the ports are enabled and configured with comma-separated lists, and the values are applied to the ports in the order they are enumerated on the PCI bus. For example:

   # esxcfg-module -s "IntMode=0,0,2,2, ... ,2,2 VMDQ=1,1,8,8, ... ,4,4" igb

   configures:

   - Ports 1 and 2: IntMode=0 and VMDQ=1
   - Ports 3 and 4: IntMode=2 and VMDQ=8
   - The last two ports: IntMode=2 and VMDQ=4

   (A further mixed-configuration sketch appears after the limitation notes below.)

3. Reboot the ESX host.

Limitation Notes:

- With standard sized Ethernet packets (MTU of 1500 or less), the maximum number of ports supported in VMDq mode is 8, with each port using 8 transmit and 8 receive queues.
- When using Jumbo Frames (MTU between 1500 and 9000) with VMDq, the maximum number of supported ports is 4, and the number of transmit and receive queues per port must also be reduced to 4.
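As an additional illustration (not part of the original steps), the same per-port syntax can describe a mixed jumbo-frame setup. A minimal sketch, assuming a host with four igb ports where only the first two carry jumbo-frame traffic and should use VMDq; per the limitation notes above, those ports are capped at 4 queues, while the remaining ports stay in default single-queue mode:

# esxcfg-module -s "IntMode=2,2,0,0 VMDQ=4,4,1,1" igb
# esxcfg-module -g igb

The second command simply re-reads the configured options so they can be checked before the required reboot; the exact port count and queue values here are assumptions chosen for illustration.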
Verifying that VMDq is enabled

To verify that VMDq is enabled:

1. Check the options configured for the igb module:

   # esxcfg-module -g igb

   The output appears similar to:

   igb enabled = 1 options = 'IntMode=2,2,2,2,2,2,2,2 VMDQ=8,8,8,8,8,8,8,8'

   The enabled value must equal 1, which indicates that the igb module loads automatically. IntMode and VMDQ must be set for each port. The example above shows a configuration with 8 ports, where all interfaces are configured in full VMDq mode.

2. Determine which ports use the igb driver using esxcfg-nics, and confirm that the driver has claimed all supported devices present in the system: enumerate them with lspci and compare the list with the output of esxcfg-nics -l.

3. Query the statistics on each interface using ethtool. If VMDq has been enabled successfully, statistics for multiple transmit and receive queues are shown (see tx_queue_0 through tx_queue_7 and rx_queue_0 through rx_queue_7 in the example below).

# esxcfg-nics -l
Name    PCI       Driver  Link  Speed     Duplex  MAC Address        MTU   Description
vmnic0  04:00.00  bnx2    Up    1000Mbps  Full    xx:xx:xx:xx:xx:xw  1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic1  08:00.00  bnx2    Down  0Mbps     Half    xx:xx:xx:xx:xx:xx  1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic2  0d:00.00  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:xy  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic3  0d:00.01  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:xz  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic4  0e:00.00  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x1  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic5  0e:00.01  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x2  1500  Intel Corporation 82576 Gigabit Network Connection
vmnic6  10:00.00  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x3  1500  Intel Corporation 82580 Gigabit Network Connection
vmnic7  10:00.01  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x4  1500  Intel Corporation 82580 Gigabit Network Connection
vmnic8  10:00.02  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x5  1500  Intel Corporation 82580 Gigabit Network Connection
vmnic9  10:00.03  igb     Up    1000Mbps  Full    xx:xx:xx:xx:xx:x6  1500  Intel Corporation 82580 Gigabit Network Connection

# lspci | grep -e 82576 -e 82580
0d:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0d:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0e:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
0e:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
10:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
10:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
10:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
10:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

# ethtool -S vmnic6
NIC statistics:
     rx_packets: 0
     tx_packets: 0
     rx_bytes: 0
     tx_bytes: 0
     rx_broadcast: 0
     tx_broadcast: 0
     rx_multicast: 0
     tx_multicast: 0
     multicast: 0
     collisions: 0
     rx_crc_errors: 0
     rx_no_buffer_count: 0
     rx_missed_errors: 0
     rx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_window_errors: 0
     tx_abort_late_coll: 0
     tx_deferred_ok: 0
     tx_single_coll_ok: 0
     tx_multi_coll_ok: 0
     tx_timeout_count: 0
     rx_long_length_errors: 0
     rx_short_length_errors: 0
     rx_align_errors: 0
     tx_tcp_seg_good: 0
     tx_tcp_seg_failed: 0
     rx_flow_control_xon: 0
     rx_flow_control_xoff: 0
     tx_flow_control_xon: 0
     tx_flow_control_xoff: 0
     rx_long_byte_count: 0
     tx_dma_out_of_sync: 0
     tx_smbus: 0
     rx_smbus: 0
     dropped_smbus: 0
     rx_errors: 0
     tx_errors: 0
     tx_dropped: 0
     rx_length_errors: 0
     rx_over_errors: 0
     rx_frame_errors: 0
     rx_fifo_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     tx_queue_0_packets: 0
     tx_queue_0_bytes: 0
     tx_queue_0_restart: 0
     tx_queue_1_packets: 0
     tx_queue_1_bytes: 0
     tx_queue_1_restart: 0
     tx_queue_2_packets: 0
     tx_queue_2_bytes: 0
     tx_queue_2_restart: 0
     tx_queue_3_packets: 0
     tx_queue_3_bytes: 0
     tx_queue_3_restart: 0
     tx_queue_4_packets: 0
     tx_queue_4_bytes: 0
     tx_queue_4_restart: 0
     tx_queue_5_packets: 0
     tx_queue_5_bytes: 0
     tx_queue_5_restart: 0
     tx_queue_6_packets: 0
     tx_queue_6_bytes: 0
     tx_queue_6_restart: 0
     tx_queue_7_packets: 0
     tx_queue_7_bytes: 0
     tx_queue_7_restart: 0
     rx_queue_0_packets: 0
     rx_queue_0_bytes: 0
     rx_queue_0_drops: 0
     rx_queue_0_csum_err: 0
     rx_queue_0_alloc_failed: 0
     rx_queue_1_packets: 0
     rx_queue_1_bytes: 0
     rx_queue_1_drops: 0
     rx_queue_1_csum_err: 0
     rx_queue_1_alloc_failed: 0
     rx_queue_2_packets: 0
     rx_queue_2_bytes: 0
     rx_queue_2_drops: 0
     rx_queue_2_csum_err: 0
     rx_queue_2_alloc_failed: 0
     rx_queue_3_packets: 0
     rx_queue_3_bytes: 0
     rx_queue_3_drops: 0
     rx_queue_3_csum_err: 0
     rx_queue_3_alloc_failed: 0
     rx_queue_4_packets: 0
     rx_queue_4_bytes: 0
     rx_queue_4_drops: 0
     rx_queue_4_csum_err: 0
     rx_queue_4_alloc_failed: 0
     rx_queue_5_packets: 0
     rx_queue_5_bytes: 0
     rx_queue_5_drops: 0
     rx_queue_5_csum_err: 0
     rx_queue_5_alloc_failed: 0
     rx_queue_6_packets: 0
     rx_queue_6_bytes: 0
     rx_queue_6_drops: 0
     rx_queue_6_csum_err: 0
     rx_queue_6_alloc_failed: 0
     rx_queue_7_packets: 0
     rx_queue_7_bytes: 0
     rx_queue_7_drops: 0
     rx_queue_7_csum_err: 0
     rx_queue_7_alloc_failed: 0

Disabling VMDq

To disable VMDq:

1. Return the igb driver to default (non-VMDq) mode by erasing the optional VMDq load parameters:

   # esxcfg-module -s "" igb

2. Reboot the ESX host.
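After the reboot, one way to confirm that the driver is back in default (non-VMDq) mode is to re-read the module options, as was done during verification. Assuming the clear above succeeded, the options string should come back empty, so the output would look similar to:

# esxcfg-module -g igb
igb enabled = 1 options = ''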