Wednesday, January 27, 2016

iperf3.. an old friend all shiny and new

So.. for a long time iperf has been a great friend..

You know what it's like when a friend you haven't seen in 18 months shows up for lunch 60 pounds lighter and wearing some snazzy threads you can't help but admire?

That's what iperf3 is like..

Installing from the RH EPEL repo on CentOS 6 x86_64 as of today is painless, and I grabbed the optional devel package while I was at it.

I used something like this:

"yum -y install iperf3 iperf3-devel"

Afterwards, "yum list installed | grep iperf" should produce output that looks kind of like this:

iperf3.x86_64           3.0.11-1.el6    @epel
iperf3-devel.x86_64     3.0.11-1.el6    @epel

So.. a little background..

1)  Some of us have multi-homed servers.  I personally wouldn't do anything as stupid as round robin default routing, but I might have some specific persistent static routes.  Those are especially useful since I cut a vlan tied directly into the core network to bypass any packet filtering devices (read: firewalls) when I'm testing a circuit.  (There's a sketch of the route config after this list.)

2)  I suppose I could replace the static routes on and off the box by running Quagga and iBGP, but that's over the top for a host that isn't supposed to perform a routing role in the network.

3)  I use a single physical interface on the kvm host in dot1q trunking mode, so to the kvm host the interfaces are configured as vlan subinterfaces ("ethX.<vlan id>"), and those are bound to bridge interfaces on the kvm host (also sketched below).  This is the easy way to maintain as much or as little isolation between the different guests on the system as well.  If the layer 2 traffic doesn't go there, it can't ever be pcap'd there ;)
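For item 1, persistent static routes on CentOS 6 live in a route-<interface> file under /etc/sysconfig/network-scripts.  A minimal sketch, reusing the addresses from the test run below and assuming 10.1.0.1 is the core-network gateway on the test vlan:

# /etc/sysconfig/network-scripts/route-eth1
# Reach the far-end test network via the core gateway, skipping the firewalls.
# (10.1.0.1 as the gateway is an assumption for the example.)
10.0.0.0/30 via 10.1.0.1 dev eth1

"ip route add 10.0.0.0/30 via 10.1.0.1 dev eth1" installs the same route on the fly without touching the config files.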
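And for item 3, the dot1q subinterface plus bridge arrangement on the kvm host looks roughly like this.  Vlan 100 and the br100 name are made up for the example:

# /etc/sysconfig/network-scripts/ifcfg-eth0.100
# dot1q subinterface carrying vlan 100 off the trunked physical port
DEVICE=eth0.100
VLAN=yes
ONBOOT=yes
BRIDGE=br100
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br100
# bridge the guests attach to; no IP on the host side needed
DEVICE=br100
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none

The guest's virtual nic attaches to br100, so the only layer 2 traffic it can ever see is what's tagged for that vlan.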

Anyway.. back to iperf.

Really simple usage to drop it in server mode on the target host:

"iperf3 -s -fM -V --bind 10.0.0.2"

Really simple usage to fire up the client: bind it to a specific address, point it at the server, and run the test, which in this case is capped at 5Mb/s of UDP traffic and runs for 30 seconds.

"iperf3 -c 10.0.0.2 --bind 10.1.0.2 -b5M -u -V -t30"

A simple excerpt from "netstat -rn" verifying that traffic from the kvm guest's second bridged interface (eth1) is routed through the network elements I want it to use:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.1.0.0       10.1.0.1       255.255.255.252 UG        0 0          0 eth1

And the end result is pretty dang nice.  I've tested TCP successfully at 100Mb/s, and UDP up to 120Mb/s so far.  Testing at line rate will take quite a bit more.. perhaps dedicated hardware, and the willingness to dump 1Gb/s or 10Gb/s of traffic onto a network, which is a bit more aggressive.. but all in all it's pretty nice.

Even aside from point-to-point (VPL/EVPL/PL, or LAN) circuits, another great use is testing things like IPSEC throughput over a LAN-to-LAN (L2L) connection.

Great stuff.  What's not to like?

Output from a test run follows (the -R flag reverses the transfer direction, which is handy for qualifying the transmit and receive paths separately, since they can be asymmetrical).

[root@iperf network-scripts]# iperf3 -c 10.0.0.2 --bind 10.1.0.2 -b5M -u -V -t30
iperf 3.0.11
Linux iperf.ttcinet.net 2.6.32-573.7.1.el6.x86_64 #1 SMP Tue Sep 22 22:00:00 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Time: Wed, 27 Jan 2016 18:09:59 GMT
Connecting to host 10.0.0.2, port 5201
      Cookie: localhost.localnet.net.1453918199.013581.
[  4] local 10.1.0.2 port 35963 connected to 10.0.0.2 port 5201
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 30 second test
[ ID] Interval           Transfer     Bandwidth       Total Datagrams
[  4]   0.00-1.00   sec   552 KBytes  4.52 Mbits/sec  69
[  4]   1.00-2.00   sec   608 KBytes  4.98 Mbits/sec  76
[  4]   2.00-3.00   sec   616 KBytes  5.05 Mbits/sec  77
[  4]   3.00-4.00   sec   608 KBytes  4.98 Mbits/sec  76
[  4]   4.00-5.00   sec   608 KBytes  4.98 Mbits/sec  76
[  4]   5.00-6.00   sec   616 KBytes  5.05 Mbits/sec  77
[  4]   6.00-7.00   sec   608 KBytes  4.98 Mbits/sec  76
[  4]   7.00-8.00   sec   608 KBytes  4.98 Mbits/sec  76
[  4]   8.00-9.00   sec   616 KBytes  5.05 Mbits/sec  77
[  4]   9.00-10.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  10.00-11.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  11.00-12.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  12.00-13.00  sec   616 KBytes  5.05 Mbits/sec  77
[  4]  13.00-14.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  14.00-15.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  15.00-16.00  sec   616 KBytes  5.05 Mbits/sec  77
[  4]  16.00-17.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  17.00-18.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  18.00-19.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  19.00-20.00  sec   616 KBytes  5.05 Mbits/sec  77
[  4]  20.00-21.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  21.00-22.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  22.00-23.00  sec   616 KBytes  5.05 Mbits/sec  77
[  4]  23.00-24.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  24.00-25.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  25.00-26.00  sec   616 KBytes  5.05 Mbits/sec  77
[  4]  26.00-27.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  27.00-28.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  28.00-29.00  sec   608 KBytes  4.98 Mbits/sec  76
[  4]  29.00-30.00  sec   616 KBytes  5.05 Mbits/sec  77
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  4]   0.00-30.00  sec  17.8 MBytes  4.99 Mbits/sec  0.248 ms  5/2282 (0.22%)
[  4] Sent 2282 datagrams
CPU Utilization: local/sender 0.2% (0.0%u/0.2%s), remote/receiver 0.0% (0.0%u/0.0%s)

A few lost packets doesn't look good.. that's something I get to explore, and to validate where the loss is *not* occurring.

Hmm.. might be system tuning and virtio under kvm.
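If it is buffering, these are the usual suspects I'd poke at first.  The values here are illustrative assumptions, not a tuned recommendation:

sysctl net.core.rmem_max net.core.wmem_max    # current socket buffer ceilings
sysctl -w net.core.rmem_max=16777216          # raise the max receive buffer (example value)
sysctl -w net.core.wmem_max=16777216          # raise the max send buffer (example value)
netstat -su                                   # any UDP "receive buffer errors"?
ip -s link show eth1                          # RX/TX drop counters on the interface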

Giving credit where credit is due: I found this other link (of course) the day after I wrote this post, and it does a great job of comparing the old iperf and the new iperf3.  While they're talking about a specific vendor, all the iperf3 and linux material works on any linux, and very similar tunables are also present in FreeBSD.  OpenSolaris is a bit different, but has most of these tuning options with different defaults.

All in all, very good with a different set of examples.

http://www.exogeni.net/2014/06/lies-damn-lies-and-iperf-dataplane-network-tuning-in-exogeni-today/

For most of us using Linux well short of the bleeding edge of network throughput, let's say 200Mb/s or less, those kernel tuning options likely aren't needed.  But if you're qualifying links, be they LAN or WAN, iperf3 is ridiculously useful now.