VyOS Cached Buffer Memory Escalation

We have an HP ProLiant DL360G5 server with a 4-core Intel Xeon processor running VyOS 1.2.9-S1 without any virtualization. We have configured BGP with a remote ASN and are sending around 300 Mbps of IPT traffic. However, we are facing a problem with virtual memory usage, which is increasing by 2 Gbps every two to three days. How can we solve this issue?

This looks like normal Linux memory allocation behavior. Memory is only freed when required. As long as free RAM is in good shape, I would not be concerned.

Or did you see the Linux OOM-killer reaping processes?
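If you want to check for that, something like this should show any OOM events in the kernel log (a minimal sketch; these are standard util-linux/systemd tools, assuming you have shell access to the box):

sudo dmesg -T | grep -i -E "out of memory|oom-killer"
# on systemd-based images you can also search the kernel journal:
sudo journalctl -k | grep -i oom

If those come back empty, nothing has actually been killed for lack of memory.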

You should also consider upgrading to 1.3 LTS, building your own image, or helping to test the new 1.4 LTS branch.


I assume you meant 2 GB and not 2 Gbps (2 Gbit/s)?

Also, from the look of your graph there doesn't seem to be any increase of the sort you mentioned.

And as @c-po mentioned, it's the nature of Linux to use available memory as cache/buffers when needed to gain performance, mainly as a read cache.

The buffers will be dropped once “real” memory for applications is needed.
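You can even watch that mechanism by hand. As a diagnostic sketch only (requires root, and there is normally no reason to do this on a production router), dropping the reclaimable caches makes buff/cache shrink and free grow:

free -m                                       # note the buff/cache column
sync                                          # flush dirty pages to disk first
echo 3 | sudo tee /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
free -m                                       # buff/cache is now smaller, free larger

The kernel will simply refill those caches over time, which is exactly the behavior you are seeing in your graph.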

For example, my Ubuntu 23.10 running on a 16 GB box shows this after a few days of uptime:

user@nuc:~$ free
               total        used        free      shared  buff/cache   available
Mem:        16261928     7810516     2573328     2144856     8381392     8451412
Swap:              0           0           0

While a VyOS 1.5-rolling from November, running as a VM guest with 2 GB of RAM that is currently basically just hosting a webserver (webfs), shows this after a few days of uptime:

vyos@vyos:~$ free
               total        used        free      shared  buff/cache   available
Mem:         2036932      732000     1032936        2440      425116     1304932
Swap:              0           0           0

You can also run cat /proc/meminfo to get more stats regarding the memory usage of your installation.

Basically, “MemAvailable” is the one that tells you the true amount of available memory (that is, how much would be available if all caches and buffers in the kernel were dropped).

That is, “MemAvailable” from /proc/meminfo roughly corresponds to the sum of “free” and “buff/cache” as seen by the free command.
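If you want to pull those numbers out directly, here is a small sketch against /proc/meminfo (the field names are the kernel's standard ones):

grep -E '^(MemTotal|MemAvailable):' /proc/meminfo
# compute how much is genuinely in use:
awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {printf "in use: %d kB (%.1f%%)\n", t-a, (t-a)*100/t}' /proc/meminfo

That “in use” figure is the one worth graphing if you want to spot a real leak, rather than the raw used/cached numbers.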


@Apachez Thank you for the correction; it is 2 GB of virtual memory shown by the SNMP server that is escalating day after day. I have also added a screenshot. Thank you for your reply.

[Screenshot: VyOS cache memory graph]
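For cross-checking what SNMP reports against the box itself, something like this should work (a sketch: the host address and community string are placeholders, and the OID assumes the UCD-SNMP-MIB memory subtree exposed by net-snmp):

snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.4.1.2021.4
# memTotalReal / memAvailReal / memBuffer / memCached live under this subtree;
# note that cached memory is reported separately from memory truly in use

Many SNMP graphing tools count cached/buffered memory as “used” unless told otherwise, which can make a healthy box look like it is leaking.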

What's the output if you log in through SSH to that box and run:

free

cat /proc/meminfo

@Apachez here are the details:
sujendra@vyos:~$ cat /proc/meminfo
MemTotal: 4037392 kB
MemFree: 1527208 kB
MemAvailable: 3517292 kB
Buffers: 168240 kB
Cached: 2082204 kB
SwapCached: 0 kB
Active: 335632 kB
Inactive: 2001300 kB
Active(anon): 99228 kB
Inactive(anon): 84256 kB
Active(file): 236404 kB
Inactive(file): 1917044 kB
Unevictable: 4188 kB
Mlocked: 4188 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 84 kB
Writeback: 0 kB
AnonPages: 89072 kB
Mapped: 31844 kB
Shmem: 93612 kB
Slab: 120844 kB
SReclaimable: 80744 kB
SUnreclaim: 40100 kB
KernelStack: 2080 kB
PageTables: 7688 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 2018696 kB
Committed_AS: 382204 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 208 kB
HardwareCorrupted: 0 kB
AnonHugePages: 16384 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 96604 kB
DirectMap2M: 4096000 kB

I don't see anything worrying there.

You've got 4037392 kB (about 4 GB) of memory in total.

Out of which 3517292 kB is available.

Meaning actual usage is 4037392 - 3517292 = 520100 kB, so roughly 512 MB is actually being used. The rest is various buffers and caches.
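As a quick sanity check of that arithmetic in a shell:

echo $(( 4037392 - 3517292 ))    # 520100 kB actually in use
echo $(( 520100 / 1024 ))        # ~507 MB, i.e. roughly 512 MB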


@Apachez Looks like it's working smoothly. Thank you for your help!