Interface slow in one direction

I’m experiencing an issue which I don’t really understand.
My setup looks as follows:

  • 2 hosts in the “LAN” network, connected to a 10Gbit/s switch
  • 1 of the hosts has two interfaces set up with 802.3ad (10Gbit/s each)
  • vyos host also connected to the switch, also with 802.3ad (2x 10Gbit/s); its NIC is a Mellanox ConnectX-2, the same as in the other 802.3ad host, which doesn’t seem to have this issue and is also running Linux (Debian)

The 802.3ad ports are configured on the switch (MikroTik CRS317-1G-16S+), so that shouldn’t be the issue.
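
For context, the bond on the VyOS side is configured along these lines (a sketch only; the address is a placeholder and the member syntax is the VyOS 1.3-style one):

set interfaces bonding bond0 mode '802.3ad'
set interfaces bonding bond0 member interface 'eth2'
set interfaces bonding bond0 member interface 'eth3'
set interfaces bonding bond0 address '192.0.2.1/24'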

When I run iperf between the two (non-router) hosts, I get ~10Gbit/s regardless of which host is the server and which is the client. However, when I run the same test against the router, I get 10Gbit/s to either host as long as the router is the client. If the router is hosting the iperf server, I only get 4-5Gbit/s, which I don’t really understand. Both interfaces in the bond are running at 10Gbit/s and full duplex, according to ethtool.
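
Concretely, the tests are just plain runs like these (addresses are placeholders; the flags are the same for iperf2 and iperf3):

$ iperf -s                 # on whichever host acts as the server
$ iperf -c 192.168.1.10    # on the client, pointed at the server's address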

I also tried removing one of the interfaces from the bond and running iperf over just that interface to check whether the issue is related to 802.3ad, but the behavior is the same.
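
For that test I just pulled one member out of the bond on the VyOS side, roughly like this (again a sketch in VyOS 1.3-style syntax):

delete interfaces bonding bond0 member interface 'eth3'
commit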

I do get some error messages when the bond comes up (e.g., after changing some settings on the switch side), although the bond seems to work fine afterwards:

[  893.292330] mlx4_en: eth2: Link Down
[  893.292756] bond0: (slave eth2): speed changed to 0 on port 2
[  893.317749] bond0: (slave eth2): link status definitely down, disabling slave
[  893.317764] mlx4_core 0000:01:00.0: Failed to bond device: -95
[  893.323720] mlx4_en: eth2: Fail to bond device
[  893.342384] mlx4_en: eth3: Link Down
[  893.342864] bond0: (slave eth3): speed changed to 0 on port 1
[  893.397292] mlx4_en: eth3: Link Up
[  893.421767] bond0: (slave eth3): link status up again after 0 ms
[  893.421983] bond0: (slave eth3): link status definitely up, 10000 Mbps full duplex
[  893.422012] mlx4_core 0000:01:00.0: Failed to bond device: -95
[  893.428116] mlx4_en: eth3: Fail to bond device
[  893.547299] mlx4_en: eth2: Link Up
[  893.629975] bond0: (slave eth2): link status definitely up, 10000 Mbps full duplex
[  893.630003] mlx4_core 0000:01:00.0: Failed to bond device: -95
[  893.636169] mlx4_en: eth2: Fail to bond device

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.17-amd64-vyos

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable

Slave Interface: eth3
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:02:c9:0f:da:59
Slave queue ID: 0
Aggregator ID: 5
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0

Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: 00:02:c9:0f:da:58
Slave queue ID: 0
Aggregator ID: 5
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0

Try checking/disabling flow control/pause frames on the MikroTik.

Flow control is already disabled for all involved ports.
I played around with the hashing policy for both bonds (was layer2, now layer3+4) and noticed some odd behavior when running iperf with multiple parallel streams (not sure if the hashing policy is related, but it’s possible).
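
For reference, on the VyOS side that change is just the hash-policy node; on the plain Debian host it’s the equivalent xmit_hash_policy bonding option (sketch):

set interfaces bonding bond0 hash-policy 'layer3+4'
commit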

When I run iperf without -P, I reliably get ~4.5Gbit/s and only one of the interfaces is used.
When I run iperf with -P 2, I very occasionally get ~8.5Gbit/s all of a sudden, even when all traffic goes via one interface (according to the MikroTik interface table). The higher I set the stream count, the more consistently I get more bandwidth (spread over both interfaces, of course); e.g., at something like -P 32 I get up to ~15.5Gbit/s, but sometimes I’ll randomly only get around ~10Gbit/s.
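
These were runs along these lines (the address is a placeholder; the numbers in the comments are the ones described above):

$ iperf -c 192.168.1.1          # single stream: ~4.5Gbit/s
$ iperf -c 192.168.1.1 -P 2     # occasionally ~8.5Gbit/s
$ iperf -c 192.168.1.1 -P 32    # up to ~15.5Gbit/s across both links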

Did some more testing, and I noticed that for some reason the CPU on the router is at 100% when it’s hosting the server (at least one core with a single stream, or all of them with more).
So this seems to be a CPU limit rather than a limit of the NIC.
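
For anyone checking the same thing, per-core usage during a run shows it immediately, e.g.:

$ top              # then press '1' for the per-core view
$ mpstat -P ALL 1  # alternative, from the sysstat package (may not be installed by default)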

This is kind of surprising to me, as the router is running an E3-1220 v3, i.e. a quad-core at 3.1GHz. It’s not the newest processor, but I think it should handle this easily, or am I mistaken?
Is there some hardware offloading that I should enable for the NIC/driver? The other hosts seem to handle the traffic without any large increase in CPU load.
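
In case it helps, this is how I looked at the offload state of the slaves (ethtool -k shows the current features, -K changes them; which ones the mlx4 driver actually supports may vary):

$ ethtool -k eth2 | grep -E 'segmentation|offload'
$ ethtool -K eth2 tso on gso on gro on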

OK, after realizing this it was fairly simple to fix.
This is actually a well-known problem, especially with the ConnectX-2: Mellanox ConnectX-2 EN and Windows 10? | ServeTheHome Forums

TL;DR: Make sure jumbo frames are enabled along the entire network path and possibly increase the send and receive buffers; that’s basically it.
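
In practice that means something along these lines on the VyOS side (the buffer sizes are example values, not a recommendation; the L2 MTU on the CRS317 ports and the MTU on the other hosts need to be raised as well):

set interfaces bonding bond0 mtu '9000'
commit

sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'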

I now get 10Gbit/s without any issues.
