BGPv6 session causing consistent 100% CPU

Hello everyone and apologies for my first post being a troubleshooting issue, but I’m completely stumped:
Using a self-built VyOS 1.3 image (HEAD of the equuleus branch at the time, git commit 2f691bb2f61e96d832ca116e388c85cfec1f5ff7), I have a single IPv6 BGP session that drives zebra totally nuts, consuming 100% of four CPU cores until I shut it down.
This is running on bare metal, a repurposed Check Point P-210 appliance (Core i5-750).
VyOS is already running two other IPv4 BGP sessions (one internal, one external) and an internal IPv6 BGP session without breaking a sweat. As soon as I added this external IPv6 BGP session, things went wild.
To narrow it down, I shut down the internal BGP session and captured whatever was going on the wire; nothing peculiar stood out.
The capture is here.
Any ideas would be appreciated.

Hi

I couldn’t open that file, but you can run show <ip|ipv6> bgp neighbors <address> (for both address families). Also verify that you have this setting:

set protocols bgp <asn> parameters default no-ipv4-unicast
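On VyOS you can confirm from operational mode that this knob is actually present in the running configuration; a quick sketch:

```shell
# Show the running config as flat "set" commands and look for the knob
show configuration commands | grep no-ipv4-unicast
```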

Another piece of advice: check that your current route-maps and filters are correct.

The file is an xz-compressed tcpdump capture file; I just tried downloading it and it opened in Wireshark just fine after `xz -d`. Anyhow, the capture file merely demonstrates that the peer does not appear to be doing anything nasty, at least to my eyes.
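For anyone who wants to repeat the check, a sketch assuming the capture is named `bgp-peer.pcap.xz` (hypothetical filename):

```shell
# Decompress while keeping the original archive
xz -dk bgp-peer.pcap.xz

# Tally BGP messages by type; a flood of UPDATEs would be the first suspect
tshark -r bgp-peer.pcap -Y bgp -T fields -e bgp.type | sort | uniq -c
```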

show ip bgp neighbors 2a01:d0:7fff:102::1 

and

show ipv6 bgp neighbors 2a01:d0:7fff:102::1

do not exhibit any differences, although

show ip bgp summary

says NoNeg for this peer, while
show ipv6 bgp summary correctly shows:

Neighbor            V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt
2a01:d0:7fff:102::1 4      xxxxx    716997       711        0    0    0 02:02:14       138198        1

All in all, everything looks in order, apart from zebra consuming all the CPU.
The complete output of show ip bgp neighbors 2a01:d0:7fff:102::1 is ~2k of text. Should I paste it here?

set protocols bgp <asn> parameters default no-ipv4-unicast

This is already there.

Another piece of advice: check that your current route-maps and filters are correct.

Fairly standard RPKI stuff and bogon filters that seem to work for every other peer.

Edit: I just altered the import filter-list to only allow paths that reside entirely within the peer’s AS and to deny everything else:

set policy as-path-list as-importpaths rule 10 action 'permit'
set policy as-path-list as-importpaths rule 10 regex '^[peer AS]$' 
set policy as-path-list as-importpaths rule 20 action 'deny'
set policy as-path-list as-importpaths rule 20 regex '.*'
set protocols bgp [my AS] neighbor 2a01:d0:7fff:102::1 address-family ipv6-unicast filter-list import 'as-importpaths'

and the situation did not improve; zebra still chews through 100% of four CPU cores.
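Since the configuration looks sane, it may help to see where zebra itself burns the cycles. FRR keeps per-event CPU statistics; a sketch to run on the router (assuming shell access):

```shell
# FRR's internal per-event CPU accounting across daemons (zebra, bgpd, ...)
sudo vtysh -c "show thread cpu"

# Which zebra thread is hot at the OS level right now
top -H -b -n 1 -p "$(pgrep -o zebra)" | head -n 20
```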

Hi

Regarding this question:

The complete output of show ip bgp neighbors 2a01:d0:7fff:102::1 is ~2k of text. Should I paste it here ?

It would be great if you could share that information (here or elsewhere); maybe there is something in it that could be useful. Also share this log for the process with high CPU:

/var/log/atop

It should show when the CPU usage increased (and at what time), so you can associate it with an event or setting.
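atop’s raw logs can be replayed to line the CPU spike up with a point in time; a sketch, assuming the standard daily file naming under /var/log/atop:

```shell
# Interactive replay: 't' steps forward, 'T' back, 'b' jumps to a given time
atop -r /var/log/atop/atop_$(date +%Y%m%d)

# Non-interactive: parseable process-level samples for the window of interest
atop -r /var/log/atop/atop_$(date +%Y%m%d) -b 20:00 -e 21:00 -P PRC | grep zebra
```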

It would be great if you could share that information (here or elsewhere); maybe there is something in it that could be useful. Also share this log for the process with high CPU

Well, here is the output.

And here’s the atop log you asked for. Bear in mind that I only had the BGP session running between ~20:00 and ~21:00 local time (UTC+2),
because while the BGP session is up, ntpd spams about 10 messages/sec of `routing socket reports: No buffer space available` into the system log, which is obviously due to the stress the system is being put under; it stops as soon as I shut down the session.
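For what it’s worth, that ntpd message usually means the receive buffer of its netlink routing socket is overflowing while the kernel is flooded with route updates. Raising the default socket buffer sizes can silence the symptom, though not the underlying churn; the values below are illustrative, not a recommendation:

```shell
# Inspect the current defaults
sysctl net.core.rmem_default net.core.rmem_max

# Temporarily bump them (illustrative sizes; not persisted across reboots)
sudo sysctl -w net.core.rmem_default=1048576
sudo sysctl -w net.core.rmem_max=8388608
```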

Thanks for sharing; it seems to be working well (peer 2a01:d0:7fff:102::1). Another possibility is that SNMP is enabled in your current instance and the polls cause high CPU (could you check it?).

Also, verify that the CPU/RAM resources are adequate for this purpose.

it seems to be working well (peer 2a01:d0:7fff:102::1)

It’s working fine, but it appears to demand far more CPU than any other BGP session. It’s worse than a `reset ip bgp [peer]`, and that only lasts for a couple of seconds.

Another possibility is that SNMP is enabled in your current instance and the polls cause high CPU (could you check it?)

snmpd is enabled and running per the defaults, but nothing polls data off it, and it is firewalled anyhow. Shutting it down did not make a difference.

Also, verify that the CPU/RAM resources are adequate for this purpose.

Perhaps I misunderstand, but an i5-750 and 8 GB of RAM should be plenty for such a setup; I have numerous other boxes of comparable specs and none exhibits this behavior.