Anomaly with high CPU usage during ping-flood

I don't know whether this is because I run VyOS (VyOS 1.4-rolling-202307250317) as a VM guest in VirtualBox (host: Ubuntu 22.10 on an Intel(R) Core™ i5-4250U with hyperthreading enabled, 16 GB RAM, and a Samsung Pro SSD for storage) or whether something fishy is going on in the VyOS kernel.

I would be happy if somebody with bare metal to spare could verify whether they see the same thing (or, for that matter, when running VyOS as a VM guest).

The VM guest is configured with 2 vCPUs and 8 GB of RAM.

When I run a ping flood from the host (the one running the VM guest) like so:

sudo ping -f 192.168.1.2 -s 1460

htop on the host shows the 4 logical cores averaging about 50% utilization each.

Meanwhile, htop inside VyOS (running as the VM guest) shows less than 10% CPU usage on a single core.

Since I assume some kind of CPU affinity is in play, increasing the load (throwing more packets at VyOS) should increase the load on the core the interface is currently pinned to (within VyOS).
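As a starting point for checking that assumption, the IRQ-to-CPU pinning can be read straight out of /proc; this small loop (assuming the NICs show up as eth* in /proc/interrupts, as in the dump further down) prints which CPUs each NIC interrupt is allowed to land on:

```shell
# Print the allowed CPU list for every IRQ whose /proc/interrupts line
# mentions "eth" (matches eth0/eth1 here; adjust the pattern for other
# driver/interface names).
awk '/eth/ { gsub(":", "", $1); print $1 }' /proc/interrupts |
while read -r irq; do
    printf 'IRQ %s -> CPUs %s\n' "$irq" \
        "$(cat "/proc/irq/$irq/smp_affinity_list" 2>/dev/null || echo unknown)"
done
```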

So nothing strange so far…

However, adding a 2nd ping flood just like the one above makes that loaded core in VyOS go from less than 10% CPU usage to about 85% (of which about 3/4 is kernel CPU time).

The CPU usage on the host increases from about 50% on all 4 logical cores to 65-70% (again, about 3/4 of it kernel CPU time).

Through “bandwidth monitor interface *” I can see roughly a 50% increase in the pps (from 8 kpps to 11.5 kpps) and bps (from 95 Mbps to 135 Mbps) that VyOS pushes with 2 concurrent ping floods versus just 1. So how come the CPU usage increase wasn't something like 10% → 20%, instead of the 10% → 85% I see?
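For reference, the bps figures line up with the pps figures once headers are counted: a 1460-byte ICMP payload plus 8 B ICMP, 20 B IPv4, and 14 B Ethernet headers is 1502 bytes per frame on the wire:

```shell
# 1460 B payload + 8 B ICMP + 20 B IPv4 + 14 B Ethernet = 1502 B per frame
frame=1502
echo "1 flood  ( 8.0 kpps): $((  8000 * frame * 8 / 1000000 )) Mbps"  # ~96, matches the ~95 Mbps observed
echo "2 floods (11.5 kpps): $(( 11500 * frame * 8 / 1000000 )) Mbps"  # ~138, matches the ~135 Mbps observed
```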

So in short: does anyone else see this (on bare metal or as a VM guest), or have an explanation of what is going on?

Suggestions for commands to run to verify what is going on internally would also be welcome, in case this is related to VyOS itself and not to running it as a VM guest.
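One set of counters I know of that seems relevant here (run inside the VyOS guest while the floods are going) is /proc/net/softnet_stat: one hex row per CPU, where a growing third column (time_squeeze, i.e. the NAPI poll loop running out of budget) points at softirq overload rather than raw per-packet cost:

```shell
# Decode /proc/net/softnet_stat: one row per CPU, values in hex.
# Column 1 = packets processed, column 2 = dropped (backlog full),
# column 3 = time_squeeze (NAPI poll ran out of budget/time).
i=0
while read -r processed dropped squeezed _; do
    printf 'cpu%d: processed=%d dropped=%d time_squeeze=%d\n' \
        "$i" "0x$processed" "0x$dropped" "0x$squeezed"
    i=$(( i + 1 ))
done < /proc/net/softnet_stat
```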

Configuration of my VyOS installation (cherry-picked relevant lines):

set firewall all-ping 'enable'
set firewall broadcast-ping 'disable'
set firewall config-trap 'enable'
set firewall ip-src-route 'disable'
set firewall ipv6-receive-redirects 'disable'
set firewall ipv6-src-route 'disable'
set firewall log-martians 'enable'
set firewall receive-redirects 'disable'
set firewall send-redirects 'enable'
set firewall source-validation 'strict'
set firewall state-policy established action 'accept'
set firewall state-policy invalid action 'drop'
set firewall state-policy related action 'accept'
set firewall syn-cookies 'enable'
set firewall twa-hazards-protection 'disable'

set interfaces ethernet eth1 address 'xxx.xxx.1.2/24'
set interfaces ethernet eth1 description 'LAN1'
set interfaces ethernet eth1 duplex 'auto'
set interfaces ethernet eth1 offload gro
set interfaces ethernet eth1 offload gso
set interfaces ethernet eth1 offload lro
set interfaces ethernet eth1 offload rfs
set interfaces ethernet eth1 offload rps
set interfaces ethernet eth1 offload sg
set interfaces ethernet eth1 offload tso
set interfaces ethernet eth1 ring-buffer rx '4096'
set interfaces ethernet eth1 ring-buffer tx '4096'
set interfaces ethernet eth1 speed 'auto'
set interfaces ethernet eth1 vrf 'INTERNET'
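Worth noting: VirtualBox's emulated NIC (e1000/82540EM by default) supports far fewer offloads than real hardware, so some of the features configured above may not actually be active. ethtool shows what the driver really accepted (features it cannot change are marked "off [fixed]"):

```shell
# Show the offload state the driver actually applied; falls back to a note
# if ethtool or eth1 is not available on the box this runs on.
ethtool -k eth1 2>/dev/null | grep -E 'segmentation|offload|scatter' \
    || echo 'ethtool -k eth1 not available here'
```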

set system conntrack expect-table-size '10485760'
set system conntrack hash-size '10485760'
set system conntrack log icmp new
set system conntrack log other new
set system conntrack log tcp new
set system conntrack log udp new
set system conntrack table-size '10485760'
set system conntrack timeout icmp '10'
set system conntrack timeout other '600'
set system conntrack timeout tcp close '10'
set system conntrack timeout tcp close-wait '30'
set system conntrack timeout tcp established '600'
set system conntrack timeout tcp fin-wait '30'
set system conntrack timeout tcp last-ack '30'
set system conntrack timeout tcp syn-recv '30'
set system conntrack timeout tcp syn-sent '30'
set system conntrack timeout tcp time-wait '30'
set system conntrack timeout udp other '600'
set system conntrack timeout udp stream '600'
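A side note on conntrack: each ping process reuses a single ICMP id, so even two parallel floods should only ever occupy roughly two conntrack entries (an assumption worth confirming); if the table shows far more than that during the test, something else is going on. A quick way to check the live entry count from the shell:

```shell
# Count live conntrack entries; expect this to stay tiny during the floods
# (roughly one entry per ping process, since each flood reuses one ICMP id).
conntrack -C 2>/dev/null \
    || cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null \
    || echo 'conntrack not loaded here'
```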

set system ip arp table-size '32768'
set system ip disable-directed-broadcast
set system ip multipath layer4-hashing

set system ipv6 disable-forwarding
set system ipv6 multipath layer4-hashing
set system ipv6 neighbor table-size '32768'

set system option performance 'throughput'

set vrf name INTERNET protocols static route xxx.xxx.0.0/0 next-hop xxx.xxx.xxx.xxx distance '1'
set vrf name INTERNET table '101'
set vrf name MGMT table '100'

With “top” (pressing 1) I can see that with a single ping flood running towards LAN1, ksoftirqd/1 uses less than 1% CPU.

But when I start that 2nd ping flood, ksoftirqd/1 skyrockets to about 60% CPU on a single core.
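To see where those softirqs land per CPU, the NET_RX row of /proc/softirqs can be sampled twice, one second apart; the per-column delta is roughly the receive-softirq rate each core is absorbing:

```shell
# Two snapshots of the per-CPU NET_RX softirq counters, 1 second apart;
# compare the columns to see which core absorbs the receive load.
grep NET_RX /proc/softirqs
sleep 1
grep NET_RX /proc/softirqs
```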

Output of “sudo cat /proc/interrupts”:

           CPU0       CPU1       
  0:        119          0   IO-APIC   2-edge      timer
  1:          0          9   IO-APIC   1-edge      i8042
  4:          0         29   IO-APIC   4-edge      ttyS0
  8:          1          0   IO-APIC   8-edge      rtc0
  9:          0          0   IO-APIC   9-fasteoi   acpi
 12:          5          0   IO-APIC  12-edge      i8042
 16:          0    7849671   IO-APIC  16-fasteoi   eth1
 19:    1293633          0   IO-APIC  19-fasteoi   eth0
 24:          0       7565   PCI-MSI 512000-edge      ahci[0000:00:1f.2]
 25:         27          0   PCI-MSI 196608-edge      xhci_hcd
NMI:          0          0   Non-maskable interrupts
LOC:     130438     353924   Local timer interrupts
SPU:          0          0   Spurious interrupts
PMI:          0          0   Performance monitoring interrupts
IWI:        139          2   IRQ work interrupts
RTR:          0          0   APIC ICR read retries
RES:       1257       1585   Rescheduling interrupts
CAL:      88004     691562   Function call interrupts
TLB:         18         71   TLB shootdowns
TRM:          0          0   Thermal event interrupts
THR:          0          0   Threshold APIC interrupts
DFR:          0          0   Deferred Error APIC interrupts
MCE:          0          0   Machine check exceptions
MCP:          7          7   Machine check polls
ERR:          0
MIS:          0
PIN:          0          0   Posted-interrupt notification event
NPI:          0          0   Nested posted-interrupt event
PIW:          0          0   Posted-interrupt wakeup event