Slow PPPoE speed

Hello there!

My config:

VyOS 1.4.0 built from source.

Installed on bare metal:

CPU Model: Intel(R) N100
Cores: 4

Total RAM: 15.39 GB

My Net speed: 1 Gbit

My ISP supports using a dedicated router via PPPoE (on VLAN 20), so I have configured it as shown in the attached commands.

It's working, meaning I get an IP and can browse without issues; however, I get slow and inconsistent download speeds, and I am unable to consistently reach above 900 Mbps. E.g., running 5 consecutive speed tests gives me anywhere from around 500 Mbps to 920 Mbps, never the same twice. Curiously enough, on upload I can get above 800-900 Mbps without issues, which for me is acceptable.

Checking with iperf from my directly connected PC I get good results:
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.05 GBytes   902 Mbits/sec  sender
[  5]   0.00-10.01  sec  1.05 GBytes   898 Mbits/sec  receiver
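
(That output format matches iperf3; for reference, the client side of such a test looks roughly like the sketch below, where 192.168.1.1 is only a placeholder for the router's LAN address and assumes an iperf3 server is running there.)

# Client-side sketch of the test above (iperf3 syntax; address is hypothetical):
iperf3 -c 192.168.1.1 -t 10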

I have browsed around the forum and I have applied optimizations mentioned in posts such as PPPoE gigabit network, high CPU usage - #13 by Dmitry

My /config/scripts/vyos-postconfig-bootup.script already includes the following configuration (eth0 is WAN, eth1 is LAN):

ethtool -G eth0 tx 4096 rx 4096
ethtool -G eth1 tx 4096 rx 4096
ethtool -G eth2 tx 4096 rx 4096

echo "32768" > /proc/sys/net/core/rps_sock_flow_entries

echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo "f" > /sys/class/net/eth0/queues/rx-1/rps_cpus
echo "f" > /sys/class/net/eth0/queues/rx-2/rps_cpus
echo "f" > /sys/class/net/eth0/queues/rx-3/rps_cpus

echo "f" > /sys/class/net/eth1/queues/rx-0/rps_cpus
echo "f" > /sys/class/net/eth1/queues/rx-1/rps_cpus
echo "f" > /sys/class/net/eth1/queues/rx-2/rps_cpus
echo "f" > /sys/class/net/eth1/queues/rx-3/rps_cpus

echo "2048" > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-1/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-2/rps_flow_cnt
echo "2048" > /sys/class/net/eth0/queues/rx-3/rps_flow_cnt

echo "2048" > /sys/class/net/eth1/queues/rx-0/rps_flow_cnt
echo "2048" > /sys/class/net/eth1/queues/rx-1/rps_flow_cnt
echo "2048" > /sys/class/net/eth1/queues/rx-2/rps_flow_cnt
echo "2048" > /sys/class/net/eth1/queues/rx-3/rps_flow_cnt


vyos@vyos:~$ sudo ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX: 4096
RX Mini: n/a
RX Jumbo: n/a
TX: 4096
TX push buff len: n/a
Current hardware settings:
RX: 4096
RX Mini: n/a
RX Jumbo: n/a
TX: 4096
RX Buf Len: n/a
CQE Size: n/a
TX Push: off
RX Push: off
TX push buff len: n/a
TCP data split: n/a

vyos@vyos:~$ sudo cat /proc/softirqs
                  CPU0       CPU1       CPU2       CPU3
        HI:          3          2          2         16
     TIMER:     669660     911469    2143909     589199
    NET_TX:      52008      49744      65875      27888
    NET_RX:    1643389    1510334    2171947    1799391
     BLOCK:       4258       3822       3689       3442
  IRQ_POLL:          0          0          0          0
   TASKLET:         38         46         36         81
     SCHED:    3284714    1978852    2268129    1137381
   HRTIMER:          0          0          0          0
       RCU:    1736501    1729023    1805250    1756780
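
(In case it helps anyone debugging along: the per-queue NIC interrupt spread can also be inspected; the exact queue names depend on the driver.)

# Per-CPU interrupt counts for each NIC queue:
grep eth /proc/interrupts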

I have also tried other firewall vendors, like OPNsense and pfSense, virtualized and on bare metal, and so far the best performance I have gotten is with VyOS.

Is there something else I can try to improve the performance? Is there an issue at the hardware level that's preventing me from getting higher speeds?

Thank you!
configuration-.txt (3.3 KB)

Your hardware is just underpowered, I'm sorry.
The only suggestion I can give is to look into flow offload (flowtables); that will help improve performance a little more.

EDIT: I'm very wrong, as noted below; this CPU should be plenty.

Having an upload speed of about 900 Mbps shows that the box isn't "underpowered".

Sure, there are faster CPUs, but it should be able to saturate a 1 Gbps connection.

I would first make sure to use something newer than 1.4.0; try the latest rolling release instead.

Then I would make sure that VyOS is installed natively on this box and not being run through some VM.

Then I would experiment (one at a time) with the various offload settings for the interfaces to see which ones increase performance (and lower CPU utilization).
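
One iteration of that might look like this (a sketch; the offload names are the ones the VyOS CLI exposes, and ethtool shows what the driver actually accepted):

# In configure mode: flip one offload, commit, re-run the speed test, note the result.
set interfaces ethernet eth0 offload gro
commit
# Back in op mode: confirm what the driver actually enabled.
sudo ethtool -k eth0 | grep -E 'segmentation|receive-offload|scatter-gather'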

Since you also use PPPoE, I would see if you can run without PPPoE in between; if that's not possible, make sure that your downstream host(s) have their MTUs adjusted to whatever MTU is left over after PPPoE has consumed its bytes (protip: it will no longer be 1500 bytes for your hosts).

I would also try using wget or curl, writing the output to /dev/null, to download some large file from a speedtest site; use that as a benchmark to determine whether the problem is between your VyOS box and the ISP or between your VyOS box and your downstream hosts.
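
For example (a sketch; the URL is just a placeholder for any large test file):

# Writing to /dev/null keeps disk I/O out of the measurement. Run it once from
# a LAN host (whole path) and once from the VyOS box itself (VyOS-to-ISP only):
curl -o /dev/null http://speedtest.example.net/1GB.bin
# or:
wget -O /dev/null http://speedtest.example.net/1GB.bin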

After that you can look into optimizing using flowtables or even VPP/DPDK.
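A possible starting point for the flowtable part (heavily hedged: I'm writing these node names from memory, so verify them with tab-completion on your build):

# Software flow offload via flowtables (sketch; node names uncertain):
set system conntrack flow-offload software interface eth0
set system conntrack flow-offload software interface pppoe0
commit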

While at it, take a look in your BIOS at how power management is configured, so performance isn't crippled by that.


While I know that PPPoE is a CPU hog, I would’ve expected an N100 to handle it at 1 Gbps at least. After all, an N100 will happily route at 10 Gbps without PPPoE from what I’ve seen.

I would definitely try enabling flow offloading to reduce CPU usage. It can make a massive difference.

Btw., you are needlessly using your bootup script for some settings that you can set directly from the VyOS CLI.
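
For example, the RPS part of your script might collapse to this (a sketch; I believe recent releases have an 'offload rps' node, but verify with tab-completion):

# Enables receive packet steering across all CPUs without the sysfs echoes:
set interfaces ethernet eth0 offload rps
set interfaces ethernet eth1 offload rps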


Your box is powerful enough to run 1 Gbps (in reality you will usually get around 940 Mbps, not more), and you don't need any additional CPU settings whatsoever; your config looks fine. Just remove any manually set MTU from your interfaces, since for a PPPoE connection on a specific VLAN you could set the MTU to 1500.
Start from scratch again.
Make sure that when you boot from the installation USB you format your hard drive first by running this command:

sudo mkfs -t ext4 /dev/sdb1 (adjust sdb1 according to your drive name)

Sorry! This is my first time running VyOS, and I was following directions from other posts.

If some of these settings are already included, could you tell me which ones? If you are kind enough to give me the exact commands, I will appreciate it.

Thanks

Some ISPs use an MTU of 1508 on the PPPoE link, so you can keep the default 1500-byte MTU on your clients.
A VLAN tag also requires 4 bytes, bringing that to 1512.
If you can't do baby jumbo frames: you can test with a 1492 MTU on the client, but I'd prefer an MSS clamp set to 1452 so you don't have to touch client settings. That at least handles TCP.
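
In VyOS that clamp would look something like this (a sketch; the arithmetic is 1492 PPPoE MTU - 20 IPv4 header - 20 TCP header = 1452):

# Clamp the TCP MSS on the PPPoE interface so clients can keep MTU 1500:
set interfaces pppoe pppoe0 ip adjust-mss '1452'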

If you go for full Ethernet frames, those are 1518 bytes untagged and 1522 bytes 802.1Q-tagged.

But that's the L2 MTU.

When you normally speak of a 1500-byte MTU, that refers to the L3 MTU, i.e. without the L2 header (which is normally Ethernet).

No worries. :grinning: There is lots of slightly outdated information out there.

At least the ring buffer settings are available via the CLI:

https://docs.vyos.io/en/latest/configuration/interfaces/ethernet.html#cfgcmd-set-interface-ethernet-interface-ring-buffer-rx-value
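
For example, mirroring the ethtool -G values from the bootup script:

# Ring buffers via the CLI instead of the postconfig script:
set interfaces ethernet eth0 ring-buffer rx '4096'
set interfaces ethernet eth0 ring-buffer tx '4096'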

Some of the other ones might be too, I haven’t checked.

Hi,
Yes, sorry for the original comment; you're right, I've just looked at the specs of that CPU and it should be plenty powerful.

I agree with the others about checking the MTU, as you might be getting fragmentation. Turning on flow offload should certainly help a lot too; I've seen great performance improvements using it on a box that couldn't do rate shaping properly, but could with the flow-offload improvements.

I would also make sure you're testing your laptop and your VyOS box at the same time; I wonder if your ISP isn't actually giving you the full 1G.


I can confirm that with my N100 and 8 GB of RAM, on VyOS 1.5 rolling, I easily reach 2500/1000 via PPPoE (2200/2300 real). And the N100 can still manage much more.

This is such great news for me :slight_smile:

I am probably going to test the suggestions given so far, but if possible, would you share your configuration with me, or let me know whether you needed to perform any of the optimizations I detailed above?

Based on this comment, I'm thinking of trying the latest rolling release and starting from scratch.

Thanks in advance

I didn’t make any particular optimization. It worked perfectly out of the box.

  • What NICs does your machine have?
  • Have you enabled offload on the Ethernet ports?

Below you’ll find the relevant parts:

...

set interfaces ethernet eth0 hw-id '***'
set interfaces ethernet eth0 offload gro
set interfaces ethernet eth0 offload gso
set interfaces ethernet eth0 offload sg
set interfaces ethernet eth0 offload tso
set interfaces ethernet eth0 vif 10 address '10.0.10.1/24'
set interfaces ethernet eth0 vif 10 description 'VLAN-MAIN'
set interfaces ethernet eth0 vif 20 address '10.0.20.1/24'
set interfaces ethernet eth0 vif 20 description 'VLAN-IOT'
set interfaces ethernet eth0 vif 30 address '10.0.30.1/24'
set interfaces ethernet eth0 vif 30 description 'VLAN-GUEST'
set interfaces ethernet eth1 hw-id '***'
set interfaces ethernet eth1 offload gro
set interfaces ethernet eth1 offload gso
set interfaces ethernet eth1 offload sg
set interfaces ethernet eth1 offload tso
set interfaces ethernet eth1 vif 100 description 'VLAN-PPPOE'
set interfaces ethernet eth2 hw-id '***'
set interfaces ethernet eth2 offload gro
set interfaces ethernet eth2 offload gso
set interfaces ethernet eth2 offload sg
set interfaces ethernet eth2 offload tso
set interfaces ethernet eth3 hw-id '***'
set interfaces ethernet eth3 offload gro
set interfaces ethernet eth3 offload gso
set interfaces ethernet eth3 offload sg
set interfaces ethernet eth3 offload tso

...

set interfaces pppoe pppoe0 authentication password '***'
set interfaces pppoe pppoe0 authentication username '***'
set interfaces pppoe pppoe0 ip adjust-mss 'clamp-mss-to-pmtu'
set interfaces pppoe pppoe0 no-peer-dns
set interfaces pppoe pppoe0 source-interface 'eth1.100'

...

Let me know if you need anything else.

So, a few things.

  1. The interfaces are: 01:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04), and offload is enabled by default on the interfaces as far as I can tell.

  2. I believe I can confirm that the proper MTU for my ISP is 1492 (1464 bytes of ICMP data + 8 bytes ICMP header + 20 bytes IP header = 1492):

ping example.com -D -s 1464 
PING example.com (96.7.128.175): 1464 data bytes
1472 bytes from 96.7.128.175: icmp_seq=0 ttl=53 time=164.038 ms
1472 bytes from 96.7.128.175: icmp_seq=1 ttl=53 time=164.571 ms
ping example.com -D -s 1465                                                    
PING example.com (96.7.128.175): 1465 data bytes
556 bytes from 192.168.1.1: frag needed and DF set (MTU 1492)
 4  5  00 d505 b632   0 0000  40  01 dc25 192.168.1.113  96.7.128.175

Also, removing all MTU settings just gives me the same result (when I run show interfaces, it still appears as 1492).

  3. To answer @tjh: I can also confirm that my ISP is giving me full speed, because when I connect directly to the ISP modem my speeds are adequate (above 900 Mbps wired). Basically, I am working on this project to be able to stop using the ISP's device. Luckily the ONT is a separate device, so I just run the Ethernet cable to the WAN port of my N100 box, without the ISP modem in the path during the tests.

Unfortunately, I'll need a bit more time to test flowtables, as I've never implemented them before. The fact that it works out of the box for @Marvitex makes me think that perhaps I have faulty hardware, but I haven't given up yet.

I will probably start from scratch with the latest rolling release this weekend and see how it goes. In the meantime, if you think of anything else I can test, let me know.

Thank you for all your help so far!

Same NICs here:

lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
03:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)

And yes, Ethernet offloads were enabled by default.
My PPPoE interface also has an MTU of 1492.
