I have a VyOS box in production with a 10Gb Intel DA520 card.
I see that the system interrupt load is very high on one processor, so I looked at the interface queues:
The ixgbe driver doesn't work correctly with RSS+QinQ. There are some ixgbe patches, but I think there is an easier way to solve this problem. Can you provide information about softirqs? sudo cat /proc/softirqs
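A one-shot way to see where network softirq load lands (rather than cat-ing the whole file) is to filter `/proc/softirqs` for the receive-path counter; the CPU column whose counter grows fastest is the one doing most of the packet work. This is a generic Linux check, not VyOS-specific:

```shell
# Print the header row plus the NET_RX softirq counters, one column per CPU.
# Re-run it a few seconds apart (or wrap it in `watch -n1`) and compare:
# the column that jumps the most is the overloaded CPU.
grep -E 'CPU|NET_RX' /proc/softirqs
```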
Also, you can try some network tuning, but be careful. Work as root: sudo su -l
I removed the QinQ configuration from my eth3, and the queues began to receive packets.
See my rx queues now:
sudo ethtool -S eth3 | grep 'rx_queue_[0-7]_packet'
rx_queue_0_packets: 858159375
rx_queue_1_packets: 121687989
rx_queue_2_packets: 117217568
rx_queue_3_packets: 115071130
rx_queue_4_packets: 116948573
rx_queue_5_packets: 115479120
rx_queue_6_packets: 611730132
rx_queue_7_packets: 131488653
With QinQ active, the queues on this interface looked like this:
sudo ethtool -S eth3 | grep 'rx_queue_[0-7]_packet'
rx_queue_0_packets: 108408595351
rx_queue_1_packets: 0
rx_queue_2_packets: 0
rx_queue_3_packets: 0
rx_queue_4_packets: 0
rx_queue_5_packets: 6268
rx_queue_6_packets: 0
rx_queue_7_packets: 0
Now I need to know if there is any way to distribute traffic across the RX queues while QinQ is active.
Hello, I wrote a few messages earlier. You can try manually enabling RPS, RFS, and XPS. If that works correctly, I think we need to find a way to implement this feature in the CLI, like smp-affinity.
Follow this example, but change it to the correct interface name; I think in your case it is eth3, the interface running QinQ.
Hello @maimun.najib, try setting the proper bitmask for queue 0:
For 18 cores: echo "3ffff" > /sys/class/net/eth0/queues/rx-0/rps_cpus
For 36 cores: echo "fffffffff" > /sys/class/net/eth0/queues/rx-0/rps_cpus
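The mask is simply one bit per CPU, so for N cores the hex value is 2^N - 1. A small sketch that computes it for an arbitrary core count (the NCPUS value and the eth3 path are placeholders for your own setup):

```shell
#!/bin/sh
# Number of cores to spread RPS over -- adjust to your machine
# (e.g. $(nproc) if you want all online CPUs).
NCPUS=18

# Hex bitmask with the low NCPUS bits set: (2^NCPUS - 1).
MASK=$(printf '%x' $(( (1 << NCPUS) - 1 )))
echo "$MASK"

# As root, apply it to the rx-0 queue of the interface (eth3 here):
#   echo "$MASK" > /sys/class/net/eth3/queues/rx-0/rps_cpus
```

For NCPUS=18 this prints 3ffff and for NCPUS=36 it prints fffffffff, matching the values above.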
Also, if you have Hyper-Threading (HT) enabled, you should disable it: HT siblings share the same physical cores, so they don't add real capacity for IRQ processing.
Usually it is only necessary to define the bitmask for the rx-0 queue. To confirm the load is now spread out, run sudo top and press 1 to show per-CPU usage.
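If you prefer a non-interactive check instead of top, per-CPU softirq time can be read straight from /proc/stat (the softirq tick counter is the 7th value on each cpuN line, i.e. awk field $8). A single CPU with a much larger value than the rest confirms the imbalance:

```shell
# Print the cumulative softirq ticks for every CPU.
# After enabling RPS, these values should grow at roughly similar rates.
awk '/^cpu[0-9]/ { print $1, "softirq_ticks:", $8 }' /proc/stat
```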