VyOS VM and poor routing performance - Solved - Nothing wrong with VyOS - Hardware offloading wasn't enabled on an EdgeRouter X in-between

Hello,
I’m trying to figure out what’s causing the poor routing performance I observe in my VyOS VMs.
I have 2 VyOS VMs running on an ESXi 6.7 host.
One VyOS VM is at 1.1.8 and the other is at 1.3.0 rolling latest.
I installed the second one (1.3.0) as an attempt to troubleshoot the routing performance issue I first observed in 1.1.8.

It’s a very simple setup. Both VyOS VMs have 2 network interfaces.
One interface, eth0, is attached to my ‘mgmt’ port group where I have the uplink of my physical server.
The other interface, eth1, is attached to a port group with no uplink and where I have all my VMs.

When I run an iperf between my PC and eth0 (no routing) I get 1 Gb/s, which is the max speed of my physical ESXi host's network adapter, so no problem there.
When I run an iperf between my PC and eth1 (routing eth0 <> eth1) I get no more than 350 Mbps, and that's what I don't understand.

I played with hardware settings, adding/removing CPU/RAM.
I played with hardware offloading settings, enabling/disabling LRO and TSO.
I can’t get anywhere close to 1 Gbps when I route traffic between eth0 and eth1.
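For anyone wanting to double-check what the guest is actually doing: one way to see which offloads are currently active inside a VyOS VM (assuming shell access on the VM; the interface name eth0 is from my setup) is ethtool:

```shell
# Show the current offload state for eth0 (run from the VyOS shell).
# Lines of interest include tcp-segmentation-offload,
# generic-segmentation-offload and large-receive-offload.
ethtool -k eth0 | grep -E 'segmentation|large-receive|scatter-gather'
```

This only reports what the kernel thinks is enabled on the vNIC; it won't show anything about devices elsewhere on the path.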

Any ideas?

You mention playing with hardware, but what network cards are you using? Have you tried vmxnet3 cards instead of whatever the default is in ESXi? (I'm going to assume E1000, but I don't have much experience with ESXi.)

Have you investigated MTU, is it possible there’s an MTU issue somewhere causing a lot of fragmentation?
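An easy way to rule out fragmentation, assuming a standard 1500-byte MTU end to end, is a don't-fragment ping at the largest payload that fits (1472 bytes = 1500 minus 20 for the IP header and 8 for the ICMP header); the address here is just a placeholder for the far-side interface:

```shell
# Windows: -f sets the DF bit, -l sets the ICMP payload size
ping -f -l 1472 192.168.160.3

# Linux equivalent: -M do forbids fragmentation, -s sets payload size
ping -M do -s 1472 192.168.160.3
```

If this fails while a smaller payload succeeds, something along the path has a smaller MTU than expected.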

I run a lot of VyOS in ESXi VMs, both for playing and for production.

I do 10Gbps+ routing/NAT/firewalling without issues.

What does your VM look like for vCPUs, RAM, and type of vNICs?


Both VyOS VMs have VMXNET 3 network adapters.
I haven't checked MTU, but I didn't tweak that part in VyOS, so I guess it's using the default 1500-byte MTU. In ESXi all my switches are distributed switches with the default 1500-byte MTU (and MAC learning is enabled, btw).

One VyOS VM is configured with 2 vCPU and 1 GB of RAM.
The other VyOS VM is configured with 1 vCPU and 512 MB of RAM.
The outcome is the same: I can't get more than 350 Mbps when routing.
I use them in a lab at home so there’s not much traffic.
I just noticed that “Hardware virtualization” isn’t enabled on the VyOS VM, I’ll see if that makes any difference.

Edit: Enabling hardware virtualization on the VyOS VM doesn’t seem to change anything. I haven’t changed offloading settings in VyOS though, will do that later.

Iperf from my pc:
192.168.5.3 is eth0 on VyOS
192.168.160.3 is eth1 on VyOS

C:\Users\Lucien\Desktop>iperf-2.0.14a-win.exe -c 192.168.5.3 -p 5001 -f m -n 1024M

Client connecting to 192.168.5.3, TCP port 5001
TCP window size: 0.06 MByte (default)

[368] local 192.168.5.100 port 57866 connected with 192.168.5.3 port 5001
[ ID] Interval Transfer Bandwidth
[368] 0.0- 9.1 sec 1024 MBytes 943 Mbits/sec

C:\Users\Lucien\Desktop>iperf-2.0.14a-win.exe -c 192.168.160.3 -p 5001 -f m -n 1024M

Client connecting to 192.168.160.3, TCP port 5001
TCP window size: 0.50 MByte (default)

[344] local 192.168.5.100 port 57914 connected with 192.168.160.3 port 5001
[ ID] Interval Transfer Bandwidth
[344] 0.0-25.0 sec 1024 MBytes 344 Mbits/sec

Hello, I solved my issue.

VyOS VMs had nothing to do with the poor routing performance. I apologize for the wasted time and bad publicity on the forum. I will modify the title accordingly.

I have an EdgeRouter X between my PC and my ESXi host, with a static route sending traffic destined for the VMs behind VyOS's eth1 interface.

Hardware offloading wasn't enabled on the EdgeRouter X. I enabled it, and I now get 1 Gbps to the VyOS eth1 network.
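For anyone hitting the same thing: on the EdgeRouter X, hardware offload (hardware NAT on its MT7621 switch chip) is toggled from the EdgeOS CLI. This is a sketch from memory, so check the docs for your firmware version:

```shell
# On the EdgeRouter X, enter configuration mode and enable hardware NAT offload
configure
set system offload hwnat enable
commit
save
exit
```

Note that enabling hwnat can require a reboot to take full effect, and it bypasses some software features for offloaded flows.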

