Traffic-policy limiter trouble with TCP on 1.2.x

Hi,

I've run into a problem using a traffic-policy limiter on ingress traffic: it badly degrades TCP performance.

My configuration:

set traffic-policy limiter 10M default bandwidth '10mbit'
set traffic-policy limiter 10M default burst '100kb'
set interfaces ethernet eth0 traffic-policy in '10M'
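
For anyone reproducing this, the installed policy can be inspected from the shell with plain tc (a quick sketch; VyOS attaches ingress policies to the ffff: ingress qdisc):

# show the ingress qdisc and the limiter's filter/police action
tc qdisc show dev eth0 ingress
tc filter show dev eth0 parent ffff: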

Test with iperf3:

iperf3 -c 2.2.2.1 -t 100 -i 1 -b 10m
Connecting to host 2.2.2.1, port 5201
[  4] local 2.2.2.2 port 49480 connected to 2.2.2.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  83.4 KBytes   683 Kbits/sec   17   2.78 KBytes       
[  4]   1.00-2.00   sec   341 KBytes  2.79 Mbits/sec   93   2.78 KBytes       
[  4]   2.00-3.00   sec   456 KBytes  3.74 Mbits/sec  118   2.78 KBytes       
[  4]   3.00-4.00   sec   381 KBytes  3.12 Mbits/sec  118   2.78 KBytes       
[  4]   4.00-5.00   sec   458 KBytes  3.75 Mbits/sec  124   2.78 KBytes       
[  4]   5.00-6.00   sec   381 KBytes  3.12 Mbits/sec  122   2.78 KBytes       
[  4]   6.00-7.00   sec   459 KBytes  3.76 Mbits/sec  122   2.78 KBytes       
[  4]   7.00-8.00   sec   459 KBytes  3.76 Mbits/sec  122   2.78 KBytes       
[  4]   8.00-9.00   sec   380 KBytes  3.11 Mbits/sec  120   2.78 KBytes       
[  4]   9.00-10.00  sec   459 KBytes  3.76 Mbits/sec  120   2.78 KBytes       
[  4]  10.00-11.00  sec   380 KBytes  3.11 Mbits/sec  116   2.78 KBytes

But once I removed the traffic-policy limiter, TCP throughput went back to normal:

iperf3 -c 2.2.2.1 -t 100 -i 1 -b 10m
Connecting to host 2.2.2.1, port 5201
[  4] local 2.2.2.2 port 49484 connected to 2.2.2.1 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.08 MBytes  9.07 Mbits/sec    0   93.2 KBytes       
[  4]   1.00-2.00   sec  1.25 MBytes  10.5 Mbits/sec    0    125 KBytes       
[  4]   2.00-3.00   sec  1.25 MBytes  10.5 Mbits/sec    0    131 KBytes       
[  4]   3.00-4.00   sec  1.12 MBytes  9.44 Mbits/sec    0    131 KBytes       
[  4]   4.00-5.00   sec  1.25 MBytes  10.5 Mbits/sec    0    131 KBytes       
[  4]   5.00-6.00   sec  1.12 MBytes  9.44 Mbits/sec    0    131 KBytes       
[  4]   6.00-7.00   sec  1.25 MBytes  10.5 Mbits/sec    0    131 KBytes       
[  4]   7.00-8.00   sec  1.12 MBytes  9.44 Mbits/sec    0    131 KBytes       
[  4]   8.00-9.00   sec  1.25 MBytes  10.5 Mbits/sec    0    131 KBytes       
[  4]   9.00-10.00  sec  1.12 MBytes  9.43 Mbits/sec    0    131 KBytes       
[  4]  10.00-11.00  sec  1.25 MBytes  10.5 Mbits/sec    0    131 KBytes     

Someone reported a similar problem on the forum for VyOS 1.1.7, but without a good resolution (traffic-policy limiter trouble with tcp? - #2 by manu). The problem still seems to exist in 1.2.x: I have tried 1.2.1, 1.2.3 and the latest rolling release, and they all show it.

Does anyone have an idea about this problem?

best regards.

Can anyone help? For certain reasons we can't use IFB-based shaping for ingress traffic, so the limiter is the only option left in the ingress direction, but TCP performance is poor with this policy.
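
(For context, the IFB approach we can't use would look roughly like this at the tc level; a generic sketch only, not VyOS CLI, with eth0 and a 10mbit tbf assumed purely for illustration:)

# generic ifb ingress-shaping pattern (illustrative only)
modprobe ifb numifbs=1
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root tbf rate 10mbit burst 100kb latency 50ms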

Hello @MapleWang, did you try disabling the NIC offloads?

set interfaces ethernet eth0 offload-options generic-receive off
set interfaces ethernet eth0 offload-options tcp-segmentation off
set interfaces ethernet eth0 offload-options generic-segmentation off

Run this on both VyOS routers under test.
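
To confirm the offloads are actually off after committing, you can check with ethtool directly (eth0 as above):

# GRO/GSO/TSO should all report "off"
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload|generic-receive-offload'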

@Dmitry,

Disabling offloading works, but I'm worried that this change has too broad an impact, especially on a hardware router.

I noticed that the tc filter configured by the traffic-policy limiter looks like this:

tc filter show dev eth0 root
filter protocol all pref 255 basic chain 0 
filter protocol all pref 255 basic chain 0 handle 0x1 flowid ffff:1 
	action order 1:  police 0x2 rate 10Mbit burst 1024Kb mtu 2Kb action drop overhead 0b 
	ref 1 bind 1

The mtu in that filter rule is 2k bytes, and VyOS doesn't provide any option to modify it. As I understand it, with offloading enabled the packets passed between the IP stack and the NIC can be up to 65536 bytes (GRO/GSO aggregates), so a policer with a 2k-byte mtu will drop most of those oversized packets. I think this is the likely cause of the problem.
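
One way to confirm this (a sketch, assuming the limiter sits on the ffff: ingress qdisc as the flowid above suggests) is to watch the policer's statistics while iperf3 is running; with offloading enabled, the dropped/overlimits counters in the action statistics climb quickly:

# -s prints the police action's sent/dropped/overlimits counters
tc -s filter show dev eth0 parent ffff: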

I enlarged the mtu to 100k with tc directly, and the problem went away.
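
For reference, re-creating the filter with a larger mtu looks roughly like this (an illustrative sketch only; the pref, rate, burst and flowid values are taken from the listing above and may differ on other systems):

# remove the limiter's filter and re-add it with mtu 100k
tc filter del dev eth0 parent ffff: protocol all prio 255
tc filter add dev eth0 parent ffff: protocol all prio 255 basic \
    police rate 10mbit burst 1024k mtu 100k drop flowid ffff:1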

I suggest either changing the default mtu the limiter uses when generating the tc command, or exposing an mtu option in the limiter configuration.
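
For example, the second option might look something like this (hypothetical syntax, not available in current VyOS):

# hypothetical option, does not exist today
set traffic-policy limiter 10M default mtu '100kb'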

best regards.

Hello @MapleWang, I think it would be a good idea to file this feature request (make the mtu configurable) on https://phabricator.vyos.net/.