iPerf is a tool designed to test the network bandwidth between two hosts. It is a simple yet powerful CLI that generates TCP or UDP traffic between two hosts, which you can use to measure the maximum bandwidth between a client and a server. It is handy for stress-testing an Ethernet link, a Wi-Fi network, or your ISP connection.

iPerf2 vs iPerf3: what is the difference?

There have been different versions of this tool over the years. It started with ttcp, which the National Laboratory for Applied Network Research (NLANR) later reworked into iPerf (iPerf2). iPerf3 is a new implementation from scratch, with the goal of a smaller, simpler code base and a library version of the functionality that can be used in other programs. The two versions offer mostly the same functionality, but they are not interoperable, and they use different default ports: 5001 for iPerf2 and 5201 for iPerf3.

iPerf3 adds features such as TCP retransmit reporting, and its verbose mode gives a lot of useful information, including CPU usage. It also has a better implementation of UDP tests. Finally, the iPerf3 code base is smaller and more optimized.
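Since both versions are often packaged side by side, you can check the exact iPerf3 version installed with the -v flag:

iperf3 -v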

In the tests done in this post, we are going to use iPerf3.

Testing the network bandwidth with TCP

To run a quick bandwidth test, first identify which machine will act as the server node and which as the client node.

Server Node

The server node listens for new connections. In our case, using iPerf3, the listening port is 5201.

To leave the iPerf server running:

iperf3 -s
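The server should then print something like the following and wait for incoming tests (exact output may vary slightly between versions):

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------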

Client Node

Now you can start the client, which will test the upload performance from the client to the server (Client -> Server).

iperf3 -c <SERVER IP> -i 1 -t 20

In this case, I have added the -i option to report results every 1 second, and the -t parameter to set the length of the test, in our case 20 seconds.

root|lavrea:~$ iperf3 -c 159.65.5.190 -i 1 -t 20
Connecting to host 159.65.5.190, port 5201
[  5] local 167.172.3.43 port 49756 connected to 159.65.5.190 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   214 MBytes  1.80 Gbits/sec  13018   55.1 KBytes       
[  5]   1.00-2.00   sec   196 MBytes  1.65 Gbits/sec  13665    250 KBytes       
[  5]   2.00-3.00   sec   254 MBytes  2.13 Gbits/sec  19770   42.4 KBytes       
[  5]   3.00-4.00   sec   184 MBytes  1.54 Gbits/sec  11308    188 KBytes       
[  5]   4.00-5.00   sec   241 MBytes  2.02 Gbits/sec  18500   48.1 KBytes       
[  5]   5.00-6.00   sec   188 MBytes  1.57 Gbits/sec  13761   56.6 KBytes       
[  5]   6.00-7.00   sec   235 MBytes  1.97 Gbits/sec  12436    245 KBytes       
[  5]   7.00-8.00   sec   205 MBytes  1.72 Gbits/sec  13587    318 KBytes       
[  5]   8.00-9.00   sec   229 MBytes  1.92 Gbits/sec  14720    191 KBytes       
[  5]   9.00-10.00  sec   229 MBytes  1.92 Gbits/sec  12992    188 KBytes       
[  5]  10.00-11.00  sec   241 MBytes  2.02 Gbits/sec  15559   19.8 KBytes       
[  5]  11.00-12.00  sec   238 MBytes  1.99 Gbits/sec  13002   69.3 KBytes       
[  5]  12.00-13.00  sec   211 MBytes  1.77 Gbits/sec  18426   31.1 KBytes       
[  5]  13.00-14.00  sec   212 MBytes  1.78 Gbits/sec  12496   33.9 KBytes       
[  5]  14.00-15.00  sec   200 MBytes  1.68 Gbits/sec  16435    191 KBytes       
[  5]  15.00-16.00  sec   225 MBytes  1.89 Gbits/sec  12645   35.4 KBytes       
[  5]  16.00-17.00  sec   212 MBytes  1.78 Gbits/sec  17291   53.7 KBytes       
[  5]  17.00-18.00  sec   212 MBytes  1.78 Gbits/sec  13723   48.1 KBytes       
[  5]  18.00-19.00  sec   238 MBytes  1.99 Gbits/sec  11413   70.7 KBytes       
[  5]  19.00-20.00  sec   212 MBytes  1.78 Gbits/sec  13865   46.7 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  4.27 GBytes  1.84 Gbits/sec  288612             sender
[  5]   0.00-20.01  sec  4.27 GBytes  1.83 Gbits/sec                  receiver

iperf Done.

In the client report, we can see that in this case our TCP bitrate averages 1.84 Gbits/sec on the sender side.

Testing the network bandwidth with UDP

TCP automatically adapts its sending rate to the available bandwidth of a connection (congestion control), so it is difficult to truly stress a link using TCP. For a harder stress test, you can use the UDP protocol instead.

Server Node

With iPerf3, nothing needs to change on the server side compared with the TCP test, since the same server accepts both TCP and UDP tests.

iperf3 -s

Client Node

On the client side, you need to add the -u option to use UDP. You also need to set the target bandwidth with the -b parameter (by default it is 1 Mbps). In our case, I am going to set 10G (10 Gbps).

iperf3 -c <SERVER IP> -u -i 1 -t 20 -b 10G

And here is the test:

root|lavrea:~$ iperf3 -c 159.65.5.190 -u -i 1 -t 20 -b 10G
Connecting to host 159.65.5.190, port 5201
[  5] local 167.172.3.43 port 47554 connected to 159.65.5.190 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   198 MBytes  1.66 Gbits/sec  143338  
[  5]   1.00-2.00   sec   175 MBytes  1.47 Gbits/sec  126975  
[  5]   2.00-3.00   sec   189 MBytes  1.59 Gbits/sec  137109  
[  5]   3.00-4.00   sec   147 MBytes  1.24 Gbits/sec  106805  
[  5]   4.00-5.00   sec   170 MBytes  1.43 Gbits/sec  123358  
[  5]   5.00-6.00   sec   218 MBytes  1.83 Gbits/sec  158036  
[  5]   6.00-7.00   sec   150 MBytes  1.26 Gbits/sec  108566  
[  5]   7.00-8.00   sec   190 MBytes  1.60 Gbits/sec  137854  
[  5]   8.00-9.00   sec   185 MBytes  1.55 Gbits/sec  133897  
[  5]   9.00-10.00  sec   203 MBytes  1.70 Gbits/sec  146706  
[  5]  10.00-11.00  sec   223 MBytes  1.87 Gbits/sec  161391  
[  5]  11.00-12.00  sec   203 MBytes  1.70 Gbits/sec  146963  
[  5]  12.00-13.00  sec   201 MBytes  1.69 Gbits/sec  145463  
[  5]  13.00-14.00  sec   188 MBytes  1.57 Gbits/sec  135785  
[  5]  14.00-15.00  sec   218 MBytes  1.83 Gbits/sec  158091  
[  5]  15.00-16.00  sec   195 MBytes  1.64 Gbits/sec  141318  
[  5]  16.00-17.00  sec   192 MBytes  1.61 Gbits/sec  139124  
[  5]  17.00-18.00  sec   199 MBytes  1.67 Gbits/sec  144444  
[  5]  18.00-19.00  sec   192 MBytes  1.61 Gbits/sec  139256  
[  5]  19.00-20.00  sec   203 MBytes  1.71 Gbits/sec  147310  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec  3.75 GBytes  1.61 Gbits/sec  0.000 ms  0/2781789 (0%)  sender
[  5]   0.00-20.00  sec  3.74 GBytes  1.61 Gbits/sec  0.003 ms  4889/2781789 (0.18%)  receiver

iperf Done.

In this case, the UDP test reaches a bitrate of 1.61 Gbits/sec, with 0.18% of datagrams lost on the receiver side.

Testing with public iPerf servers

To check an Internet connection, you can use a public iPerf server. The official iPerf website maintains a list of available servers.
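Pick a host (and, if indicated, a port) from that list and run the usual client command against it; the hostname below is just a placeholder:

iperf3 -c <PUBLIC SERVER> -p 5201 -t 10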

Basic commands you should know

Server Node

Start the server in daemon mode

iperf3 -s -D
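Since -D detaches the process from the terminal, you can stop a daemonized server later by killing the process, for example:

pkill iperf3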

Define a different port

iperf3 -s -p 5003
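When the server listens on a non-default port, the client has to point at the same port with the -p option:

iperf3 -c <SERVER IP> -p 5003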

Add a log file with timestamps

iperf3 -s -D --logfile /var/log/iperf.log --timestamps
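With the server daemonized and logging to a file, you can follow the results as tests come in:

tail -f /var/log/iperf.log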

Add the verbose option

iperf3 -s -V

Client Node

Set the report interval

Run a 30-second test, reporting results every 2 seconds

iperf3 -c <SERVER IP> -i 2 -t 30

Reverse direction

Run a test in the direction SERVER -> CLIENT with the -R option. By default, iPerf sends the traffic in the other direction (CLIENT -> SERVER). This is especially interesting for comparing uplink and downlink.

iperf3 -c <SERVER IP> -R -i 1 -t 30
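As a side note, recent iPerf3 releases (3.7 and later) can also run both directions simultaneously with the --bidir option:

iperf3 -c <SERVER IP> --bidir -i 1 -t 30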

Parallel streams and TCP window size

Run a test with 4 parallel streams (-P) and a TCP window size of 32 KB (-w 32768, in bytes).

iperf3 -c <SERVER IP> -i 1 -t 20 -w 32768 -P 4
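Note that -w accepts unit suffixes (K, M), so the same window size can also be written as:

iperf3 -c <SERVER IP> -i 1 -t 20 -w 32K -P 4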

Conclusion

iPerf3 is a very flexible tool, with many configuration options that can help us measure the bandwidth between our nodes.