See the attached Zabbix graph covering the past month.
Not sure what commands I would need to find out where the memory is going.
V1 and V2 are a pair; V1 is primary.
V3 and V4 are a pair; V3 is primary.
They share the same back-end network, but the front-end networks are separate.
Starting around the 10th, V2 and V4 started slowly leaking memory; that is about when the V3/V4 network went live.
It’s interesting that V2, which is basically idle waiting for V1 to commit suicide, is leaking faster than V4.
At this time it is not serious, and extrapolating the graph suggests it could go on for about a year before the boxes run out of memory. But I have a firewall/router with a 5-year uptime that is just fine, so if we are aiming for excellence…
I’ve had my fair share of commercial-vendor memory leaks, CPU spikes, and CPU ‘leaks’ (constantly growing CPU usage). No doubt there are examples of highly reliable network OSes and hardware platforms, but even then you often have to work through a list of bugs and caveats when assessing an OS release to judge whether any known problems would affect you, and even so it often comes down to finding a reliable release and sticking with it forever.
Not sure about the cause of the memory leak, but it’s Linux underneath, so if you are comfortable becoming root and troubleshooting (top, ps, etc.), you could report back some more information.
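For a first pass as root, something like this is a reasonable minimal sketch (assuming the usual procps tools are present in the base image):

    free -m                           # overall picture: real usage vs buffers/cache
    ps aux --sort=-rss | head -n 15   # biggest userspace processes by resident memory
    top                               # then press shift-M to sort live by memory

If the per-process numbers don’t come anywhere near the loss Zabbix is graphing, the memory is probably being held by the kernel rather than by any process; /proc/meminfo and slabtop are the next stop in that case.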
I’ve been running 1.1.7 in multiple places, in clustered configurations, and memory use has been very consistent.
I’ve worked with Cisco more than any other vendor, and their switches, routers, and firewalls are littered with this kind of strange symptom, along with eggshell-walking configuration caveats that vary from one software release to the next.
Check out these links on memory troubleshooting:
See if any of the suggestions there help you identify processes that are high in memory, or growing over time.
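For a slow leak like this, one trick that has worked for me is sampling the top consumers periodically and diffing the snapshots later. A throwaway sketch (the log path and interval are arbitrary choices of mine):

    #!/bin/sh
    # append a timestamped snapshot of the ten largest processes (by RSS) every 10 minutes
    while true; do
        date >> /tmp/memlog.txt
        ps -eo rss,pid,comm --sort=-rss | head -n 10 >> /tmp/memlog.txt
        sleep 600
    done

After a day or two, a process whose RSS column climbs in every snapshot stands out immediately; if nothing climbs, suspect the kernel.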
A little update. (Unfortunately, the other VM of the V3/V4 pair is currently dead. Yay, cloud!)
Survey suggests the memory is being eaten by conntrack.
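That would fit the timing, since a new network going live means new flows being tracked. Conntrack entries live in kernel slab memory, so top and ps will never show them; for anyone else chasing this, the table is visible as sketched below (paths assume a reasonably recent kernel, older ones use /proc/sys/net/ipv4/netfilter/ip_conntrack_* instead, and conntrack -L needs conntrack-tools installed):

    cat /proc/sys/net/netfilter/nf_conntrack_count   # entries currently tracked
    cat /proc/sys/net/netfilter/nf_conntrack_max     # ceiling before new connections get dropped
    grep conntrack /proc/slabinfo                    # the slab caches actually holding the entries
    conntrack -L | head                              # sample the tracked flows themselves

Also worth knowing: the default timeout for established TCP flows (nf_conntrack_tcp_timeout_established) is 432000 seconds, i.e. 5 days, so a steady trickle of long-lived or never-closed flows can look exactly like a slow leak until the table finally plateaus.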