Interface IRQ Affinity

Hello,

I have purchased a new Protectli FW4B and put VyOS 1.2.8 (a self-built image) on it. I noticed that irq-affinity for the interfaces only uses CPU 0 and 1, never CPU 2 and 3 (this box has a 4-core CPU). Is this normal/expected? It looks like the system has multiple RX/TX queues per NIC as well. On my old (current) Protectli FW1, irq-affinity balances all four NICs across all four CPUs as expected…

irq-affinity[1591]: cpus=4 cores=4 threads=1 sockets=1
irq-affinity[1591]: eth0: assign eth0-TxRx-0 to cpu 0
irq-affinity[1591]: eth0: irq 116 affinity set to 0x1
irq-affinity[1591]: eth0: assign eth0-TxRx-1 to cpu 1
irq-affinity[1591]: eth0: irq 117 affinity set to 0x2

irq-affinity[1594]: cpus=4 cores=4 threads=1 sockets=1
irq-affinity[1594]: eth1: assign eth1-TxRx-0 to cpu 0
irq-affinity[1594]: eth1: irq 121 affinity set to 0x1
irq-affinity[1594]: eth1: assign eth1-TxRx-1 to cpu 1
irq-affinity[1594]: eth1: irq 122 affinity set to 0x2

irq-affinity[1588]: cpus=4 cores=4 threads=1 sockets=1
irq-affinity[1588]: eth2: assign eth2-TxRx-0 to cpu 0
irq-affinity[1588]: eth2: irq 124 affinity set to 0x1
irq-affinity[1588]: eth2: assign eth2-TxRx-1 to cpu 1
irq-affinity[1588]: eth2: irq 125 affinity set to 0x2

irq-affinity[1597]: cpus=4 cores=4 threads=1 sockets=1
irq-affinity[1597]: eth3: assign eth3-TxRx-0 to cpu 0
irq-affinity[1597]: eth3: irq 127 affinity set to 0x1
irq-affinity[1597]: eth3: assign eth3-TxRx-1 to cpu 1
irq-affinity[1597]: eth3: irq 128 affinity set to 0x2

What command gave you the output you’ve pasted?

The best way to check is to do:

sudo cat /proc/interrupts

And see whether the interrupts are being balanced approximately equally that way.
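
If you want to watch the counters move while traffic is flowing, something like this should also work (watch should be in the standard image, though I haven’t double-checked):

watch -n 1 "grep -E 'CPU|eth' /proc/interrupts"

Queues that are actually being serviced will show their counts climbing in the corresponding CPU columns.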

Hi @tjh, that output was taken from “show log”. /proc/interrupts shows the same thing: CPU0 and CPU1 are the only ones being used for all NICs. There were a small number of hits on CPU2 and CPU3, but I’m guessing those happened at boot before irq-affinity moved the queues to CPU0 and CPU1 (the counts are not increasing on those).

vyos@vyos:/proc$ sudo cat interrupts | grep eth
 117:          0          0          0          0   PCI-MSI 524288-edge      eth0
 118:        108          0          5          0   PCI-MSI 524289-edge      eth0-TxRx-0
 119:          0        108          0          5   PCI-MSI 524290-edge      eth0-TxRx-1
 120:          0          0          0          0   PCI-MSI 1048576-edge      eth1
 121:        108          3          0          0   PCI-MSI 1048577-edge      eth1-TxRx-0
 122:          0        108          3          0   PCI-MSI 1048578-edge      eth1-TxRx-1
 123:          0          0          0          0   PCI-MSI 1572864-edge      eth2
 124:        115          0          0          0   PCI-MSI 1572865-edge      eth2-TxRx-0
 125:          0        115          0          0   PCI-MSI 1572866-edge      eth2-TxRx-1
 126:          0          0          0          0   PCI-MSI 2097152-edge      eth3
 127:        107          0          0          6   PCI-MSI 2097153-edge      eth3-TxRx-0
 128:          6        107          0          0   PCI-MSI 2097154-edge      eth3-TxRx-1

Do you have these commands?

set interfaces ethernet eth0 smp-affinity 'auto'
set interfaces ethernet eth1 smp-affinity 'auto'
set interfaces ethernet eth2 smp-affinity 'auto'
set interfaces ethernet eth3 smp-affinity 'auto'

I believe it’s the default, but I have it applied explicitly and I’m seeing good balancing (admittedly my VyOS is virtualised under KVM).

Hi @tjh

Yes, the interfaces were configured with smp-affinity ‘auto’. I have changed the config to manually assign the CPUs per interface (referencing this link for understanding the values) and now the load is spread across all four CPUs, although, even though each NIC is multi-queue, it assigns every queue of an interface to the same CPU. I guess this will work for now. I’m curious how this is going to work predictably on VyOS 1.3, since smp-affinity has been removed; if, for instance, I mainly use eth0 and eth1, how would I make sure the IRQs for these interfaces/queues never share the same CPU…

vyos@vyos:~$ show config commands | match affinity
set interfaces ethernet eth0 smp-affinity '1'
set interfaces ethernet eth1 smp-affinity '2'
set interfaces ethernet eth2 smp-affinity '4'
set interfaces ethernet eth3 smp-affinity '8'
vyos@vyos:~$
vyos@vyos:~$ sudo cat /proc/interrupts | grep eth
 115:          0          0          0          0   PCI-MSI 524288-edge      eth0
 116:         83          0          5          0   PCI-MSI 524289-edge      eth0-TxRx-0
 117:         83          0          0          5   PCI-MSI 524290-edge      eth0-TxRx-1
 120:          0          0          0          0   PCI-MSI 1048576-edge      eth1
 121:          0         86          0          0   PCI-MSI 1048577-edge      eth1-TxRx-0
 122:          0         83          3          0   PCI-MSI 1048578-edge      eth1-TxRx-1
 123:          0          0          0          0   PCI-MSI 1572864-edge      eth2
 124:          7          0         83          0   PCI-MSI 1572865-edge      eth2-TxRx-0
 125:          0          7         83          0   PCI-MSI 1572866-edge      eth2-TxRx-1
 126:          0          0          0          0   PCI-MSI 2097152-edge      eth3
 127:          0          0          0         89   PCI-MSI 2097153-edge      eth3-TxRx-0
 128:          6          0          0         83   PCI-MSI 2097154-edge      eth3-TxRx-1
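
For anyone else reading: the smp-affinity values are hexadecimal CPU bitmasks, where bit n selects CPU n (1 = cpu0, 2 = cpu1, 4 = cpu2, 8 = cpu3). As far as I can tell this ends up in the kernel’s per-IRQ interface, so the manual equivalent would be something like the commands below, using the IRQ numbers from the output above (the numbers can change between reboots, so don’t hard-code them blindly):

# Pin both eth0 queues to cpu0 (mask 0x1), matching smp-affinity '1' above
echo 1 | sudo tee /proc/irq/116/smp_affinity
echo 1 | sudo tee /proc/irq/117/smp_affinity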

Interesting - I wonder why it doesn’t balance properly.
The old Protectli FW1, does it have Intel NICs?

If you think this is a bug I would suggest logging a Phabricator ticket for it - it does seem odd that it doesn’t put the 3rd and 4th NIC on CPU 2 and CPU 3.

Yes, the Protectli FW1 has Intel NICs, but it uses the e1000e driver, as opposed to the igb driver on the FW4B. For those interested, I added the following to /config/scripts/vyos-preconfig-bootup.script to disable multi-queue. Once multi-queue was disabled, smp-affinity ‘auto’ worked as expected, assigning the four interface IRQs across the four CPUs. Running NIC multi-queue on a 4-port router with only 4 CPU cores doesn’t make much sense (IMO), unless you, for example, only use two of the interfaces with two queues per interface, and then spread the combined four queues across the four CPUs.

As a caveat, I have not put this into “production” yet, so I have no performance data for single-queue vs. multi-queue.

vyos@vyos:~$ cat /config/scripts/vyos-preconfig-bootup.script
# Increase ring buffer
ethtool -G eth0 tx 1024 rx 1024
ethtool -G eth1 tx 1024 rx 1024
ethtool -G eth2 tx 1024 rx 1024
ethtool -G eth3 tx 1024 rx 1024

# Disable NIC multiqueue
ethtool -L eth0 combined 1
ethtool -L eth1 combined 1
ethtool -L eth2 combined 1
ethtool -L eth3 combined 1
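
To confirm the queue change took effect, ethtool can show the channel counts:

ethtool -l eth0

After the script runs, “Combined” under the current hardware settings should read 1.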

vyos@vyos:~$ cat /proc/interrupts | grep eth
 115:          0          0          0          0   PCI-MSI 524288-edge      eth0
 116:        325          0          0          0   PCI-MSI 524289-edge      eth0-TxRx-0
 117:          0          0          0          0   PCI-MSI 1048576-edge      eth1
 120:          0        320          3          0   PCI-MSI 1048577-edge      eth1-TxRx-0
 121:          0          0          0          0   PCI-MSI 1572864-edge      eth2
 122:          7          0        321          0   PCI-MSI 1572865-edge      eth2-TxRx-0
 123:          0          0          0          0   PCI-MSI 2097152-edge      eth3
 124:          0          0          6        321   PCI-MSI 2097153-edge      eth3-TxRx-0
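
As for the VyOS 1.3 question above: since smp-affinity is gone there, one possible fallback (untested on 1.3, so treat this as a sketch - the queue names are just the ones from my box) would be to look up the queue IRQs at boot from the same preconfig script, assuming it still runs as root, and write the masks directly:

# Hypothetical sketch: spread the eth0/eth1 queues across all four CPUs.
# Masks are hex CPU bitmasks: 1=cpu0, 2=cpu1, 4=cpu2, 8=cpu3.
pin_queue() {
    # Look up the IRQ number for a queue by name, since it can change between reboots
    irq=$(awk -F: -v q="$1" '$0 ~ q { gsub(/ /, "", $1); print $1 }' /proc/interrupts)
    [ -n "$irq" ] && echo "$2" > "/proc/irq/$irq/smp_affinity"
}
pin_queue eth0-TxRx-0 1
pin_queue eth0-TxRx-1 2
pin_queue eth1-TxRx-0 4
pin_queue eth1-TxRx-1 8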
