This article provides detailed instructions for using the iperf tool and the DD "net iperf" command to measure network throughput between a media server and a Data Domain (DD) system, or between two DD systems. The iperf utility is an industry-standard tool designed to measure raw network performance between two endpoints. It helps validate the underlying NIC-to-NIC bandwidth across all intermediate network components, such as switches, routers, firewalls, and WAN links, using TCP or UDP traffic. This is critical for isolating network-related issues from application-level problems.

You may need to run iperf when experiencing any of the following symptoms:
-Slow backup or restore performance using CIFS, NFS, or DD Boost over IP between the media server and the DD system.
-Backup or restore failures over CIFS, NFS, or DD Boost between the media server and the DD system.
-Replication performance issues (for example, collection, directory, or MTree replication, or Managed File Replication) between two DD systems.
Performance issues during backup, restore, or replication often stem from network bottlenecks rather than application-level limitations. These bottlenecks can occur anywhere along the data path, between the media server and the Data Domain system or between two DD systems, and may include:
-Bandwidth limitations on NICs or intermediate network devices (switches, routers, firewalls).
-MTU mismatches causing fragmentation and retransmissions.
-High latency or packet loss due to congestion, faulty cables, or misconfigured QoS.
-TCP window scaling issues or insufficient buffer sizes impacting throughput.
-Firewall or IDS/IPS inspection overhead throttling SMB/NFS/DD Boost traffic.
Because these issues are often invisible at the application layer, iperf is used to measure raw TCP/UDP throughput between endpoints, validating the underlying network performance independent of CIFS, NFS, or DD Boost. This helps isolate whether slow backups or restores are caused by network constraints or by application/storage configuration.
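When comparing iperf results against backup throughput, keep in mind that iperf reports in bits per second while backup software usually reports in bytes per second. The figures below are simple unit conversions, not measured values:

1 Gbps = 1000 Mbits/sec / 8 = 125 MBytes/sec theoretical maximum
10 Gbps = 10000 Mbits/sec / 8 = 1250 MBytes/sec theoretical maximum

For example, if iperf reports 900 Mbits/sec (roughly 112 MBytes/sec) between the media server and the DD system but backups run at only 30 MBytes/sec, the network is unlikely to be the bottleneck, and the application or storage configuration should be examined instead.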
Iperf is a widely used, open-source network performance testing utility that generates TCP and UDP traffic streams to measure available bandwidth and throughput. It provides an accurate assessment of raw network capacity between two endpoints, such as a media server and a Data Domain (DD) system, or between two DD systems. By simulating data transfer at the transport layer, iperf helps identify network bottlenecks, latency issues, and packet loss across intermediate components like switches, routers, and firewalls, independent of application-level protocols such as CIFS, NFS, or DD Boost. Iperf has two modes: server and client.

SECTION I: WHERE TO OBTAIN THE IPERF TOOL

There are three ways to get the iperf executable file:

Method 1: The DD system has a built-in "net iperf" command. If you only want to test bandwidth between two DD systems, the "net iperf" command is sufficient.

Method 2: The iperf tool is available on the DD system under the /ddr/var/tools/iperf folder. To obtain the iperf executable, map /ddvar on your media server using either CIFS or NFS, and copy the executable to any folder, such as /tmp, C:\EMC, or C:\ddtools.

1. To create a temporary CIFS share for /ddvar (for a Windows media server):
cifs share create <share name; 'ddvar' without any slash is recommended> path /ddvar clients <IP of relevant remote Windows host>
Eg: sysadmin@dd# cifs share create ddvar path /ddvar clients <IP of affected backup host>
To remove the temporary CIFS share:
cifs share destroy <temporary share name>
Eg: sysadmin@dd# cifs share destroy ddvar

2. To create a temporary NFS export for /ddvar (for a Linux media server):
nfs export create <export name; 'ddvar' without any slash is recommended> path /ddvar clients <IP of relevant remote Linux host>
Eg: sysadmin@dd# nfs export create ddvar path /ddvar clients <IP of relevant remote Linux host>
To remove the temporary NFS export:
nfs export destroy <temporary export name>
Eg: sysadmin@dd# nfs export destroy ddvar

Method 3: Download from the DD system using scp (UNIX) or pscp (Windows):

UNIX:
scp <localuser>@<IP or hostname of DD>:/ddr/var/tools/iperf/<OS>/<iperf executable> <local path>
example: scp sysadmin@10.10.10.10:/ddr/var/tools/iperf/Linux/iperf /tmp/iperf

OS/iperf executable:
HP-UX_RISC/iperf
AIX/iperf
Linux/iperf
HP-UX_IA64/iperf
Solaris_Sparc/iperf
Windows/iperf.exe

Windows:
pscp -scp <localuser>@<IP or hostname of DD>:/ddr/var/tools/iperf/Windows/iperf.exe <local path>
example: pscp -scp sysadmin@10.10.10.10:/ddr/var/tools/iperf/Windows/iperf.exe C:\ddtools\iperf.exe
Note: pscp (a command-line version of scp for Windows) is available for download from putty.org (external). WinSCP does not work for downloading from the DD system.

SECTION II: FIND THE IP ADDRESS ON THE DD SYSTEM TO BE USED IN THE IPERF TEST

#net show hardware, to see which port has a 1G/10G link
#net show settings, to see the IP assigned to each port

SECTION III: HOW TO RUN IPERF

EXAMPLE I: HOW TO RUN IPERF BETWEEN TWO DDs
On the destination DD, using a PuTTY session:
#net iperf server
Then on the source DD:
#net iperf client <DestinationDDIP> interval 10 duration 60
Note:
-After the test, use Ctrl+C on the source and destination DD to stop iperf. Run #net iperf server status to confirm that iperf has been stopped.
-The above command shows what network bandwidth is available from the source DD to the destination DD, with 1 stream, reporting at 10-second intervals for 60 seconds.
-You can run the above command with "connection 10" to test the network bandwidth with 10 connections, which should be greater than with 1 connection.
-If replication is running between these two DDs at the same time, the iperf result shows the network bandwidth left over beyond what replication is already using. You can use #iostat 2 to see the throughput on the port at the same time.
-You can reverse the test to see the available bandwidth in the other direction, that is, from the destination DD to the source DD.
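A run between two DDs on a 1Gbps link might look like the following. This is an illustrative sketch only; the IP address, interval lines, and throughput figures are placeholders, not output from a real system:

sysadmin@sourceDD# net iperf client 10.10.10.20 interval 10 duration 60
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec
[  3] 10.0-20.0 sec  1.10 GBytes   941 Mbits/sec
...
[  3]  0.0-60.0 sec  6.56 GBytes   939 Mbits/sec

A result in this range on a 1Gbps port indicates a healthy network path (see SECTION IV below).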
EXAMPLE II: HOW TO RUN IPERF FROM THE MEDIA SERVER TO THE DD
This can be used for slow backup/write issues.
On the DD, using a PuTTY session:
#net iperf server --- This puts the DD in server mode, listening for client connections.
On the media server (Windows or Linux):
#iperf -c <DDIPaddress> -t 60 -i 10
Note:
-After the test, use Ctrl+C on the source to stop iperf. Run #net iperf server status to confirm that iperf has been stopped.
-You can run the above command with -P 10 to test the network bandwidth with 10 connections, which should be greater than with 1 connection (see the sample run after SECTION IV below).
-You may need to use the option -w 256K to specify the window size.
Example:
[root@client1 iperf-2.0.5]# iperf -c 11.65.228.28 -i 3 -t 30 -w 256K
------------------------------------------------------------
Client connecting to 11.65.228.28, TCP port 5001
TCP window size: 512 KByte (WARNING: requested 256 KByte)
------------------------------------------------------------
[  3] local 11.65.249.45 port 63535 connected with 11.65.228.28 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 3.0 sec  2.79 GBytes  7.98 Gbits/sec
[  3]  3.0- 6.0 sec  2.86 GBytes  8.19 Gbits/sec
[  3]  6.0- 9.0 sec  2.90 GBytes  8.30 Gbits/sec
Note: In the above output, .45 is the source IP used by the media server for writing to the DD, and .28 is the DD IP used for backup.

EXAMPLE III: HOW TO RUN IPERF FROM THE DD TO THE MEDIA SERVER
This can be used for slow restore/read issues.
On the media server (Windows or Linux):
#iperf -s
On the DD:
#net iperf client <ip address> duration 60 interval 10
Note:
-You may need to use the option -p 5001 (or another port number) to specify the port.
-You may need to use the option -w 256K to specify the window size.

SECTION IV: NEXT STEPS
-Ideally, between 1Gbps ports, we would like to see 800Mbps to 900Mbps throughput when nothing else is using the bandwidth. Between 10Gbps ports, we would like to see a few Gbps of throughput when nothing else is using the bandwidth.
-When throughput is lower than the above ideal line speed, use multiple connections (10 or 20) to see whether the aggregate available bandwidth increases (see the sample run below).
-Also check routing and make sure the MTU is consistent all the way along the path. (See the notes below on how to use ping to check MTU.)
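NOTES: SAMPLE MULTI-CONNECTION RUN
The following is an illustrative sketch of the multi-connection test mentioned in EXAMPLE II and SECTION IV; the address and the output values are placeholders. With -P 10, iperf prints one interval line per stream plus a [SUM] line; the [SUM] line is the aggregate throughput to compare against line rate:

#iperf -c <DDIPaddress> -t 60 -i 10 -P 10
... (one line per stream omitted) ...
[SUM]  0.0-60.0 sec  65.8 GBytes  9.42 Gbits/sec

The equivalent between two DDs, per the note in EXAMPLE I, is:
#net iperf client <DestinationDDIP> interval 10 duration 60 connection 10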
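NOTES: HOW TO USE PING TO CHECK MTU
A minimal sketch, assuming standard ping options on a Linux or Windows media server; the address is a placeholder. Send a packet with the Don't Fragment (DF) bit set and a payload sized to the expected MTU. The payload is the MTU minus 28 bytes for the IP and ICMP headers: 1472 for a standard 1500-byte MTU, or 8972 for a 9000-byte jumbo MTU.

Linux: #ping -M do -s 1472 <DDIPaddress>
Windows: >ping -f -l 1472 <DDIPaddress>

If the reply is "Frag needed" (Linux) or "Packet needs to be fragmented but DF set" (Windows), a device along the path has a smaller MTU than expected. Reduce the payload size until the ping succeeds to determine the actual path MTU, then align the MTU settings end to end.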