Gigabit PPPoE (FTTH) low throughput on ESXi

I’m really struggling to get the maximum throughput out of my VyOS VM.

VyOS is a VM on an ESXi 7 host with:
4 vCPU
2GB RAM

The ESXi host has:
“4 CPUs x AMD A4-5050 APU with Radeon™ HD Graphics”
8GB RAM

The line is capable of speeds of about 940/104Mbps down/up.

I can easily reach those speeds using a FritzBox router, but I’m finding it very hard to get the same speeds using my VyOS VM.

I have eth0 for my LAN and eth1 for the pppoe0 interface (which is my WAN).

After all the changes I’ve tried, I cannot reach speeds over 680/104Mbps with VyOS.

Can someone help me get back the missing 240Mbps somehow? :wink:

I hope it’s not a hardware limitation but only a misconfiguration (or a lack of performance tuning).

Thanks

Hello @Matwolf, did you try to do some tuning (enabling RPS)? See Slow PPPoE throughput, high CPU with APU3C4 and VyOS 1.3 Rolling - #2 by Dmitry

Hello @Dmitry, yes I did!

With mitigations=off and without RPS I get 480/104 Mbps.

If I enable RPS with

echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo "f" > /sys/class/net/eth1/queues/rx-0/rps_cpus

I get about 650/104 Mbps.
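
For what it’s worth, I apply those echo lines from a boot script so they survive reboots; roughly like this (I’m using what I believe is the standard VyOS post-config boot hook path, so adjust if yours differs; mitigations=off itself is a kernel boot parameter and lives in the GRUB config instead):

#!/bin/bash
# /config/scripts/vyos-postconfig-bootup.script
# Re-applies the RPS masks after the VyOS configuration has loaded at boot.
echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo "f" > /sys/class/net/eth1/queues/rx-0/rps_cpus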

We now have set interfaces ethernet eth0 offload rps on the latest 1.3 beta and the 1.4 release.
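
A minimal configuration session using it would be something like this (eth1 included here as well for the WAN-facing NIC; adjust to your interface names):

configure
set interfaces ethernet eth0 offload rps
set interfaces ethernet eth1 offload rps
commit
save
exit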

Thanks @c-po,
that CLI command is really useful and allows me to remove the config script :wink:

But even using the command in place of the manual configuration, the speeds are the same: no more than 680Mbps download.

(I didn’t mention it before, but 104Mbps is the maximum upload available on my ISP contract, so that’s fine :wink: )

The CLI command actually does the same thing you did manually, just via the CLI.

@Matwolf did you try passing a real port through to the VM? Can you show the output of:

show interfaces ethernet eth0 physical
show interfaces ethernet eth1 physical

Unfortunately I cannot use passthrough.

Here is the output:

Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: Unknown
        Supports Wake-on: uag
        Wake-on: d
        Link detected: yes
driver: vmxnet3
version: 1.5.0.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:0b:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: Unknown
        Supports Wake-on: uag
        Wake-on: d
        Link detected: yes
driver: vmxnet3
version: 1.5.0.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:13:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

Ok, can you take a screenshot of top (press 1 to show per-CPU usage) while you test bandwidth?

This speed test was actually one of the best :wink:

It’s hard to read the numbers with the GIF embedded here, but if you right-click and open it in another tab it’s easier to read ^^’

It seems like only one of the 4 ksoftirqd processes reaches 100%, while two stay at about 50% and the last one is almost idle.
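
In case it helps anyone reproducing this, the same imbalance can be watched directly from the shell using only /proc (assuming watch is available on the image); one NET_RX column growing much faster than the others matches what top shows:

# Show the per-CPU NET_RX/NET_TX softirq counters, refreshed every second
watch -n1 'grep -E "NET_RX|NET_TX" /proc/softirqs'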

What physical network card is in the host?
Let’s say you spin up a Windows 10 VM on the host - then iPerf from LAN -> Windows 10 VM. Do you see similar speeds?

I say this because I have BT 910/140 Mbps and I’m running VyOS on Hyper-V Server 2019. With a minimal configuration I can get the full speeds the ISP supplies, with no tweaks inside the VM. Perhaps it is a configuration issue or a limitation on the host?

In my host I’ve got an i5-10500T paired with an Intel I350 Quad-Port Gigabit PCIe NIC.

Yes, same question: which hardware network card do you have in this ESXi host?
Could you try changing the RPS bitmask, and also provide the output of sudo cat /proc/interrupts?

Intel I350 Dual-Port Gigabit

Iperf between a Win10 machine on my LAN and an Ubuntu Server VM on the same host:

iperf3.exe -c 10.80.0.2
Connecting to host 10.80.0.2, port 5201
[  4] local 10.80.100.145 port 56491 connected to 10.80.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   110 MBytes   922 Mbits/sec
[  4]   1.00-2.00   sec   111 MBytes   935 Mbits/sec
[  4]   2.00-3.00   sec   112 MBytes   940 Mbits/sec
[  4]   3.00-4.00   sec   112 MBytes   935 Mbits/sec
[  4]   4.00-5.00   sec   112 MBytes   938 Mbits/sec
[  4]   5.00-6.00   sec   112 MBytes   939 Mbits/sec
[  4]   6.00-7.00   sec   112 MBytes   936 Mbits/sec
[  4]   7.00-8.00   sec   112 MBytes   937 Mbits/sec
[  4]   8.00-9.00   sec   111 MBytes   934 Mbits/sec
[  4]   9.00-10.00  sec   112 MBytes   936 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec                  receiver

iperf Done.

And reversed

iperf3.exe -c 10.80.0.2 -R
Connecting to host 10.80.0.2, port 5201
Reverse mode, remote host 10.80.0.2 is sending
[  4] local 10.80.100.145 port 56517 connected to 10.80.0.2 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   949 Mbits/sec
[  4]   1.00-2.00   sec   113 MBytes   947 Mbits/sec
[  4]   2.00-3.00   sec   112 MBytes   935 Mbits/sec
[  4]   3.00-4.00   sec   113 MBytes   949 Mbits/sec
[  4]   4.00-5.00   sec   113 MBytes   945 Mbits/sec
[  4]   5.00-6.00   sec   113 MBytes   948 Mbits/sec
[  4]   6.00-7.00   sec   113 MBytes   948 Mbits/sec
[  4]   7.00-8.00   sec   113 MBytes   949 Mbits/sec
[  4]   8.00-9.00   sec   113 MBytes   948 Mbits/sec
[  4]   9.00-10.00  sec   113 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   948 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.10 GBytes   947 Mbits/sec                  receiver

iperf Done.
sudo cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
  0:          3          0          0          0   IO-APIC   2-edge      timer
  1:          0          0          0          9   IO-APIC   1-edge      i8042
  8:          1          0          0          0   IO-APIC   8-edge      rtc0
  9:          0          0          0          0   IO-APIC   9-fasteoi   acpi
 12:          0          0          5          0   IO-APIC  12-edge      i8042
 14:          0          0          0          0   IO-APIC  14-edge      ata_piix
 15:          0          0          0          0   IO-APIC  15-edge      ata_piix
 18:          0         64          0          0   IO-APIC  18-fasteoi   uhci_hcd:usb2
 19:          0          0          0          0   IO-APIC  19-fasteoi   ehci_hcd:usb1
 24:          0          0          0          0   PCI-MSI 344064-edge      PCIe PME, pciehp
 25:          0          0          0          0   PCI-MSI 346112-edge      PCIe PME, pciehp
 26:          0          0          0          0   PCI-MSI 348160-edge      PCIe PME, pciehp
 27:          0          0          0          0   PCI-MSI 350208-edge      PCIe PME, pciehp
 28:          0          0          0          0   PCI-MSI 352256-edge      PCIe PME, pciehp
 29:          0          0          0          0   PCI-MSI 354304-edge      PCIe PME, pciehp
 30:          0          0          0          0   PCI-MSI 356352-edge      PCIe PME, pciehp
 31:          0          0          0          0   PCI-MSI 358400-edge      PCIe PME, pciehp
 32:          0          0          0          0   PCI-MSI 360448-edge      PCIe PME, pciehp
 33:          0          0          0          0   PCI-MSI 362496-edge      PCIe PME, pciehp
 34:          0          0          0          0   PCI-MSI 364544-edge      PCIe PME, pciehp
 35:          0          0          0          0   PCI-MSI 366592-edge      PCIe PME, pciehp
 36:          0          0          0          0   PCI-MSI 368640-edge      PCIe PME, pciehp
 37:          0          0          0          0   PCI-MSI 370688-edge      PCIe PME, pciehp
 38:          0          0          0          0   PCI-MSI 372736-edge      PCIe PME, pciehp
 39:          0          0          0          0   PCI-MSI 374784-edge      PCIe PME, pciehp
 40:          0          0          0          0   PCI-MSI 376832-edge      PCIe PME, pciehp
 41:          0          0          0          0   PCI-MSI 378880-edge      PCIe PME, pciehp
 42:          0          0          0          0   PCI-MSI 380928-edge      PCIe PME, pciehp
 43:          0          0          0          0   PCI-MSI 382976-edge      PCIe PME, pciehp
 44:          0          0          0          0   PCI-MSI 385024-edge      PCIe PME, pciehp
 45:          0          0          0          0   PCI-MSI 387072-edge      PCIe PME, pciehp
 46:          0          0          0          0   PCI-MSI 389120-edge      PCIe PME, pciehp
 47:          0          0          0          0   PCI-MSI 391168-edge      PCIe PME, pciehp
 48:          0          0          0          0   PCI-MSI 393216-edge      PCIe PME, pciehp
 49:          0          0          0          0   PCI-MSI 395264-edge      PCIe PME, pciehp
 50:          0          0          0          0   PCI-MSI 397312-edge      PCIe PME, pciehp
 51:          0          0          0          0   PCI-MSI 399360-edge      PCIe PME, pciehp
 52:          0          0          0          0   PCI-MSI 401408-edge      PCIe PME, pciehp
 53:          0          0          0          0   PCI-MSI 403456-edge      PCIe PME, pciehp
 54:          0          0          0          0   PCI-MSI 405504-edge      PCIe PME, pciehp
 55:          0          0          0          0   PCI-MSI 407552-edge      PCIe PME, pciehp
 56:          0          1          0          0   PCI-MSI 2097152-edge      eth3-rxtx-0
 57:          0          0          4          0   PCI-MSI 2097153-edge      eth3-rxtx-1
 58:          0          0          0          0   PCI-MSI 2097154-edge      eth3-rxtx-2
 59:          4          0          0          0   PCI-MSI 2097155-edge      eth3-rxtx-3
 60:          0          0          0          0   PCI-MSI 2097156-edge      eth3-event-4
 61:          0          0      28647          0   PCI-MSI 1097728-edge      ahci[0000:02:03.0]
 62:   32851863          0          0          0   PCI-MSI 5767168-edge      eth0-rxtx-0
 63:          0    4732670          0          0   PCI-MSI 5767169-edge      eth0-rxtx-1
 64:          0          0   18177296          0   PCI-MSI 5767170-edge      eth0-rxtx-2
 65:          0          0          0    1824970   PCI-MSI 5767171-edge      eth0-rxtx-3
 66:          0          0          0          0   PCI-MSI 5767172-edge      eth0-event-4
 67:          0          0       8583          0   PCI-MSI 1572864-edge      vmw_pvscsi
 68:          0          0          0   27518809   PCI-MSI 9961472-edge      eth1-rxtx-0
 69:     680727          0          0          0   PCI-MSI 9961473-edge      eth1-rxtx-1
 70:          0     821889          0          0   PCI-MSI 9961474-edge      eth1-rxtx-2
 71:          0          0     637799          0   PCI-MSI 9961475-edge      eth1-rxtx-3
 72:          0          0          0          0   PCI-MSI 9961476-edge      eth1-event-4
 73:          0          0      56287          0   PCI-MSI 14155776-edge      eth2-rxtx-0
 74:          0          0          0       1200   PCI-MSI 14155777-edge      eth2-rxtx-1
 75:       1271          0          0          0   PCI-MSI 14155778-edge      eth2-rxtx-2
 76:          0        246          0          0   PCI-MSI 14155779-edge      eth2-rxtx-3
 77:          0          0          0          0   PCI-MSI 14155780-edge      eth2-event-4
 78:          0          0          0       6921   PCI-MSI 129024-edge      vmw_vmci
 79:        101          0          0          0   PCI-MSI 129025-edge      vmw_vmci
NMI:        170        404        250        232   Non-maskable interrupts
LOC:     960270    1301585     856376    1204073   Local timer interrupts
SPU:          0          0          0          0   Spurious interrupts
PMI:        170        404        250        232   Performance monitoring interrupts
IWI:          0          2          3         11   IRQ work interrupts
RTR:          0          0          0          0   APIC ICR read retries
RES:       4106       5851       7680       4958   Rescheduling interrupts
CAL:     415411    9576949    3273967    3305263   Function call interrupts
TLB:       4083       1236       2891       2318   TLB shootdowns
TRM:          0          0          0          0   Thermal event interrupts
THR:          0          0          0          0   Threshold APIC interrupts
DFR:          0          0          0          0   Deferred Error APIC interrupts
MCE:          0          0          0          0   Machine check exceptions
MCP:        184        184        184        184   Machine check polls
ERR:          0
MIS:          0
PIN:          0          0          0          0   Posted-interrupt notification event
NPI:          0          0          0          0   Nested posted-interrupt event
PIW:          0          0          0          0   Posted-interrupt wakeup event

Please also try setting the ring buffers:

 set interfaces ethernet eth0 ring-buffer rx 4096
 set interfaces ethernet eth0 ring-buffer tx 4096
 set interfaces ethernet eth1 ring-buffer rx 4096
 set interfaces ethernet eth1 ring-buffer tx 4096

And play with the RPS bitmask:

echo "ff" > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo "ff" > /sys/class/net/eth1/queues/rx-0/rps_cpus

Should I play with RPS on every queue?

The network card seems to have 4 queues:

ls /sys/class/net/eth1/queues/
rx-0  rx-1  rx-2  rx-3  tx-0  tx-1  tx-2  tx-3

I see that with

set interfaces ethernet eth1 offload rps

I get “e” on rx-0 and “0” everywhere else.
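
If it turns out I should touch every queue, I guess a small loop would do it; just a sketch on my side, not tested yet:

# Apply the same RPS mask to every rx queue of eth1 (same idea for eth0)
for q in /sys/class/net/eth1/queues/rx-*/rps_cpus; do
    echo f > "$q"
done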

Thanks for your help

Try setting f only for rx-0.
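
For reference, the value is just a hexadecimal CPU bitmask (bit 0 = CPU0, bit 1 = CPU1, and so on), so “e” = binary 1110 = CPUs 1-3 and “f” = 1111 = all four of your vCPUs; “ff” only adds anything on machines with more than 4 CPUs. A small sketch to decode a mask, assuming your 4 vCPUs are numbered 0-3:

# Print which CPUs a given RPS hex mask selects
mask=0xe
for cpu in 0 1 2 3; do
    if (( (mask >> cpu) & 1 )); then echo "CPU$cpu selected"; fi
done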

Unfortunately, changing the RPS bitmask from “e” to “f” doesn’t seem to produce any perceptible improvement…

To exclude some variables, I connected a PC with iperf on the WAN side (no PPPoE but on the same physical network interface used by PPPoE).

Now I can run speed tests between two machines, LAN <-> WAN, traversing the VyOS VM (with NAT).

This way I hope we can exclude the variability introduced by external speed test servers, PPPoE overhead, and possible misconfiguration or congestion on the ISP side.
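
Concretely, the test looks roughly like this (the WAN-side address below is just a placeholder for the PC I plugged into that segment):

# On the PC attached to the WAN-side segment:
iperf3 -s

# On a LAN client, download direction (WAN -> LAN through the VyOS VM and NAT):
iperf3 -c 192.0.2.10 -R -t 30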

In any case, at the moment the iperf tests WAN -> LAN are consistent with the tests made so far.
Same speeds.

Increasing the ring buffer size seems to have improved the situation a bit. Now I can more easily reach speeds of about 750Mbps (almost a 100Mbps improvement).
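
(To double-check that the new sizes actually took effect, the ring parameters can be read back from the VyOS shell:)

sudo ethtool -g eth0
sudo ethtool -g eth1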

Do you have a chance to install the perf utilities and run the perf top command?
Which kernel version are you running now (uname -a)?

On the VyOS VM?

Linux vyos 5.10.7-amd64-vyos #1 SMP Sat Jan 16 12:09:04 UTC 2021 x86_64 GNU/Linux