PPPoE gigabit network, high CPU usage

Hello there,

My Config:

Host config (Proxmox)

4 x Intel® Core™ i3-5005U CPU @ 2.00GHz (1 Socket)

8 GB RAM

VM config (VyOS)

2 cores

512 MB RAM

My connection speed: 1/1 Gbit PPPoE (optical)

If I run a speed test, the maximum is about 600/600 Mbit and the CPU in the VyOS VM is maxed out. Does that mean the hardware is not enough for this bandwidth?

If I copy a file from the VM to another machine inside the LAN, I can use the full gigabit bandwidth. Do I need to upgrade the hardware, or is some setting wrong?

What is your experience? Is it possible to drive a gigabit PPPoE connection on similar hardware?

Hello @Vamp, can you share a screenshot of sudo top (press 1 to show per-CPU usage) while a speed test is running?
I have an idea: you can try configuring RPS (Receive Packet Steering) manually for this case.
Note: a 2.00 GHz CPU may also simply not be enough for this case.
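
For reference, you can check the current steering state first (assuming the WAN interface is eth0; an all-zero mask means RPS is disabled for that queue):

grep . /sys/class/net/eth0/queues/rx-*/rps_cpus
cat /proc/sys/net/core/rps_sock_flow_entries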

Hello @Dmitry ,

Here is the picture (a few minutes ago I set it to 4 cores, which is why 4 CPUs show up)

I assume your WAN interface is eth0.

Try setting the following:

sudo su -l

# Size of the global RFS (Receive Flow Steering) socket flow table
echo "32768" > /proc/sys/net/core/rps_sock_flow_entries

# rps_cpus is a hex bitmask of the CPUs allowed to process each RX queue:
# 1 = CPU0, 2 = CPU1, 4 = CPU2, 8 = CPU3
echo "2" > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo "4" > /sys/class/net/eth0/queues/rx-1/rps_cpus
echo "8" > /sys/class/net/eth0/queues/rx-2/rps_cpus
echo "1" > /sys/class/net/eth0/queues/rx-3/rps_cpus

# Per-queue flow table size for RFS
echo "2048" > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-2/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-3/rps_flow_cnt
exit
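
Note that these sysfs writes do not survive a reboot. If they help, one way to re-apply them at boot (a sketch, assuming a 4-CPU VM and eth0; on VyOS the /config/scripts/vyos-postconfig-bootup.script hook is one place for it) is:

#!/bin/bash
# Re-apply RPS/RFS tuning at boot; "f" allows all 4 CPUs for every queue,
# use the per-queue masks above instead if you prefer strict pinning.
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
for q in /sys/class/net/eth0/queues/rx-*; do
    echo f > "$q/rps_cpus"
    echo 2048 > "$q/rps_flow_cnt"
done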

If you see any warnings, please check:

ls /sys/class/net/eth0/queues/

Run the speed test again and please also provide a screenshot of sudo top and the output of these commands:

sudo cat /proc/softirqs
sudo ethtool -g eth0
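
While the test runs, you can also watch how the NET_RX load is spread across the CPUs (assuming watch is available in the image):

watch -n1 "grep -E 'CPU|NET_RX' /proc/softirqs"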

Hello @Vamp, do you have any results? This case is very interesting to me.

@Dmitry I cannot test it right now, it is a production system, but I will try it this weekend!

@Dmitry

I got warnings (missing file or directory) on these lines:

echo "4" > /sys/class/net/eth0/queues/rx-1/rps_cpus
echo "8" > /sys/class/net/eth0/queues/rx-2/rps_cpus
echo "1" > /sys/class/net/eth0/queues/rx-3/rps_cpus


echo "2048" > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-2/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-3/rps_flow_cnt

I ran this command:

root@vyos:~# ls /sys/class/net/eth0/queues/
rx-0  tx-0

And the other commands show this:

root@vyos:~# sudo cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3
          HI:          0          0          1          0
       TIMER:    3707338    3040568    1230683    2733274
      NET_TX:     347467     127506      93146      10768
      NET_RX:   11912778   20839712      98250     100144
       BLOCK:          0          0          0          0
    IRQ_POLL:          0          0          0          0
     TASKLET:         11         10          1        672
       SCHED:    2619571    2327082     985307    1551294
     HRTIMER:          0          0          0          0
         RCU:    2596150    1948829    1148228    2121294
root@vyos:~# sudo ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Hello @Vamp. Your ls output shows that eth0 only has a single rx/tx queue pair, which is why the writes to rx-1 through rx-3 failed. You can try using multiqueue for this machine's NICs. I don't have much experience with Proxmox, but I saw some information about it in the Proxmox docs.
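
From the docs, it seems to be roughly this (a sketch; 100 is a placeholder VM ID, the MAC and bridge must match your existing net0 line, and the NIC has to be VirtIO for multiqueue):

qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=4

# then inside the VyOS guest, enable the extra queues:
ethtool -L eth0 combined 4

The same option is exposed in the GUI as the Multiqueue field of the network device.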

@Dmitry I enabled multiqueue (4, one per core) on the WAN network card and applied your changes without errors. Here is the result:

vyos@vyos:~$ sudo cat /proc/softirqs
                    CPU0       CPU1       CPU2       CPU3
          HI:          1          0          0          0
       TIMER:     107045     112344     173187     127780
      NET_TX:         23       1018      10435       2758
      NET_RX:       3539     121943     925750       5550
       BLOCK:          0          0          0          0
    IRQ_POLL:          0          0          0          0
     TASKLET:          1          3          2         34
       SCHED:      84858      76682     106472      81292
     HRTIMER:          0          0          0          0
         RCU:     122104     126720     160634     134672

vyos@vyos:~$ sudo ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Seems the IRQ utilization looks better now. What about the speed?

Similar. But the CPU usage is better now.

Maybe the bottleneck is the hypervisor? Is it possible to use PCI passthrough for the NIC in this case?
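
Roughly, passthrough on Proxmox looks like this (a sketch; 01:00.0 and VM ID 100 are placeholders, and VT-d/IOMMU must be enabled in the BIOS and kernel):

# on the Proxmox host: add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then:
update-grub
reboot

# find the NIC's PCI address and hand the device to the VM:
lspci | grep -i ethernet
qm set 100 --hostpci0 01:00.0

Keep in mind the host loses access to that NIC once it is passed through.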