Gigabit PPPoE (FTTH) low throughput on ESXi

Yes, try uploading this package to your VyOS VM and installing it:

wget https://dev.packages.vyos.net/repositories/current/pool/main/l/linux-5.10.7-amd64-vyos/linux-tools-5.10.7-amd64-vyos_5.10.7-1_amd64.deb
sudo dpkg -i linux-tools-5.10.7-amd64-vyos_5.10.7-1_amd64.deb

Then run sudo perf top while also running a bandwidth test. It would be great to see a screencast like the one you posted before.
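For example (iperf3 and the target address here are just placeholders; use whatever bandwidth test you ran before):

# Session 1: watch where CPU time goes while the link is under load
sudo perf top

# Session 2: generate traffic; replace <iperf-server> with your test server
iperf3 -c <iperf-server> -t 60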

The following packages have unmet dependencies:
 linux-tools-5.10.7-amd64-vyos : Depends: libc6-i386 (>= 2.7) but it is not installable
                                 Depends: libc6-x32 (>= 2.16) but it is not installable

It seems the dependencies are missing from the repos.

I manually downloaded the missing dependencies from the Debian repos:
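Roughly like this (the exact version strings below are hypothetical; check packages.debian.org for the ones matching your release):

# Fetch the two missing libraries from the Debian pool, then install everything
wget http://ftp.debian.org/debian/pool/main/g/glibc/libc6-i386_2.28-10_amd64.deb
wget http://ftp.debian.org/debian/pool/main/g/glibc/libc6-x32_2.28-10_amd64.deb
sudo dpkg -i libc6-i386_*.deb libc6-x32_*.deb
sudo dpkg -i linux-tools-5.10.7-amd64-vyos_5.10.7-1_amd64.deb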

Set ff on each rx-*/rps_cpus. Depending on how many CPU cores you have, you can increase it, but leave at least one core per NUMA domain for handling the interrupt (MSI-X). You can use the already mapped CPUs from your /proc/interrupts map: add the decimal values for each core and convert to hex; that is the mask you want to set.
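For example, on a 4-core VM where a queue's interrupt lands on core 0 (eth1 and the queue index below are placeholders):

# Cores 1-3 have bit values 2, 4 and 8; core 0 stays free for the MSI-X IRQ
printf '%x\n' $((2 + 4 + 8))     # prints: e
echo "e" > /sys/class/net/eth1/queues/rx-0/rps_cpus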

Hi @hagbard,

I posted my /proc/interrupts here → Gigabit PPPoE (FTTH) low throughput on ESXi - #14 by Matwolf

If I understand correctly, for each queue of the network card I should avoid the core that is handling its interrupt. Right?

So, if for eth1 I have:

          CPU0       CPU1       CPU2       CPU3
68:          0          0          0   27518809   PCI-MSI 9961472-edge      eth1-rxtx-0
69:     680727          0          0          0   PCI-MSI 9961473-edge      eth1-rxtx-1
70:          0     821889          0          0   PCI-MSI 9961474-edge      eth1-rxtx-2
71:          0          0     637799          0   PCI-MSI 9961475-edge      eth1-rxtx-3
72:          0          0          0          0   PCI-MSI 9961476-edge      eth1-event-4

I should set the bitmasks the following way:

echo "7" > /sys/class/net/eth1/queues/rx-0/rps_cpus
echo "e" > /sys/class/net/eth1/queues/rx-1/rps_cpus
echo "d" > /sys/class/net/eth1/queues/rx-2/rps_cpus
echo "b" > /sys/class/net/eth1/queues/rx-3/rps_cpus

Right?

You can also include other cores, e.g. cores 0,1,2,3 = bin(00001111) = hex 0x0f = f (1+2+4+8).
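If you would rather not do the arithmetic by hand, a small shell sketch (just an illustration, nothing VyOS-specific):

# Build an rps_cpus mask from a list of core IDs by OR-ing in 1 << core
mask=0
for core in 0 1 2 3; do
    mask=$(( mask | (1 << core) ))
done
printf '%x\n' "$mask"    # prints: f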

Mmm… then I don’t understand what you mean by that.

I thought I had to leave one core free in the bitmask :sweat_smile:
The VM has only 4 vCPUs.

Thanks

I didn’t want to confuse you; your calculation looks correct. I just wanted to make sure you knew how to calculate it.


I made another test.

Same hardware, no ESXi, bare-metal VyOS (live) with RPS disabled on the interfaces…

Single-core usage at 65% on average and maximum achievable performance.

Now, I was running a VyOS live image from a USB key, without any services (openvpn, wireguard, dhcp-server, etc.), only pppoe configured… but I’m starting to think there is an issue somewhere…
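(For clarity, “RPS disabled” just means the rps_cpus masks were left at zero, which is the kernel default:)

# A zero mask means RPS is off for that queue; this is the default state
cat /sys/class/net/eth1/queues/rx-0/rps_cpus    # prints 0 (possibly zero-padded)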

Is it possible that ESXi is limiting my VyOS VM this much? Or is there maybe some other configuration that I’ve missed on the hypervisor side?

Do you have a chance to test with a passthrough HW interface to VyOS on ESXi?

Unfortunately SR-IOV isn’t supported by that hardware :pensive:

You do not need SR-IOV; just set the interface as passthrough directly to the VM.

Even “simple” passthrough isn’t supported, it seems.