NAT for public NTP Server

I want to host a Stratum 1 NTP box behind a VyOS router.
I’ve made the “typical” config:
nat {
    destination {
        rule 10 {
            description "NTP Server"
            destination {
                port 123
            }
            inbound-interface eth1
            protocol udp
            translation {
                address 192.168.XX.XX
            }
        }
    }
}

All went fine: the pool monitoring saw my server and slowly ranked it up, until it reached 10.
At that point, 384 Kbps of NTP requests hit the poor VyOS box.
At peak, I’ve seen more than 25,000 entries in the conntrack table.
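As a back-of-envelope check (my numbers, not measured on the box): at 384 Kbps of 48-byte NTP payloads, the packet rate multiplied by the default 30-second conntrack UDP timeout lands in the same ballpark as the entry count above:

```python
# Rough sanity check. Assumptions: 28 bytes of IPv4+UDP headers per packet,
# and the default nf_conntrack_udp_timeout of 30 seconds.
link_bps = 384_000          # 384 Kbps of NTP traffic
pkt_bytes = 48 + 28         # 48-byte NTP payload + IPv4/UDP headers
pps = link_bps / (pkt_bytes * 8)
udp_timeout_s = 30          # default nf_conntrack_udp_timeout
entries = pps * udp_timeout_s
print(f"{pps:.0f} pkt/s -> ~{entries:.0f} concurrent conntrack entries")
# -> 632 pkt/s -> ~18947 concurrent conntrack entries
```

That is within shouting distance of the ~25,000 entries observed, so the conntrack table filling up purely from NTP flows is plausible.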

And … suddenly … the VyOS box drops packets and the ranking is degraded.

I’ve pushed conntrack values to the max:
system {
    conntrack {
        expect-table-size 50000000
        hash-size 50000000
        table-size 50000000
    }
}
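One thing to watch with values that large (my own estimate, not from the thread; the ~300 bytes per entry is an approximation that varies by kernel version and enabled extensions): a conntrack table actually filled to 50,000,000 entries would want far more memory than a 2 GB box has, and the hash bucket array alone is allocated up front:

```python
# Rough memory estimate for an aggressively sized conntrack table.
# entry_bytes is approximate; exact sizeof varies by kernel build.
table_size = 50_000_000
hash_size = 50_000_000
entry_bytes = 300        # ~size of one conntrack entry (approximation)
bucket_bytes = 8         # one pointer per hash bucket on 64-bit x86
worst_case = table_size * entry_bytes + hash_size * bucket_bytes
print(f"worst-case conntrack memory: ~{worst_case / 2**30:.1f} GiB")
# -> worst-case conntrack memory: ~14.3 GiB
```

Entries are only allocated on demand, so the box pays the full cost only if the table fills — but the ~400 MB bucket array is reserved immediately, which is already a fifth of 2 GB of RAM.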

With this aggressive setup, VyOS did survive longer … but the ranking was still degraded.

See history of the ranking …

While processing the internet traffic … CPU load was minimal.
There was still memory available.
And … there was no log of any failure of any sort.

The CPU is an Intel(R) Atom™ N2600 @ 1.60GHz.
The box has 2 GB of RAM.

Any hint is more than welcome.

Are you 100% sure it’s the VyOS box dropping traffic and not the NTP server behind it?

Yes, 100% sure.
The box behind is an NTP appliance designed for exactly this kind of load.
At another site we have the same appliance … and it is working just fine.


Did you verify you can connect to it from the local private net without loss while outside connections are dropping? The fact that the same appliance works at another site does not necessarily rule out an appliance problem at this one.

Also worth checking for any duplex/negotiation issues in the network path, and confirming whether non-NTP traffic also experiences problems when NTP traffic does.

Is VyOS, on the same hardware spec, in front of the other site's appliance? If so, is NAT involved there?



Thanks for the “hints” …

When NTP is not performing well, the appliance itself is still perfectly fine.
No VyOS on the other site …

I’ve done some tests to try to get more visibility into what is going wrong.

I’ve removed the NAT rule and just let the VyOS box itself serve NTP … same pattern.
NTP is a lot of very small packets, 48 bytes each …
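Those 48 bytes are the fixed NTP header. For probing the server from the private net without a full client, a minimal mode-3 request can be sketched like this (my own hypothetical helper, not from the thread):

```python
import socket
import struct

def ntp_probe(host: str, timeout: float = 2.0) -> bytes:
    """Send a minimal 48-byte NTPv4 client request and return the raw reply."""
    # First byte: LI=0, VN=4, Mode=3 (client) -> 0x23; remaining 47 bytes zeroed.
    request = struct.pack("!B47x", 0x23)
    assert len(request) == 48
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(request, (host, 123))
        reply, _ = s.recvfrom(512)
    return reply
```

A healthy server should answer with another 48-byte packet whose mode field (low 3 bits of the first byte) is 4, i.e. "server". Running this from inside the LAN while outside queries are failing would separate a VyOS problem from an appliance problem.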

Funny thing: even after disabling the NAT rule, conntrack still keeps track of UDP packets on port 123.
I’ve then added a rule:
system {
    conntrack {
        ignore {
            rule 10 {
                description NTP
                destination {
                    port 123
                }
                inbound-interface eth1
                protocol udp
            }
        }
    }
}

I’ve also “kind of blindly” tweaked kernel parameters:
net.core.flow_limit_table_len = 8192
net.core.message_burst = 20
net.core.netdev_budget = 600
net.core.netdev_max_backlog = 2000
net.core.optmem_max = 40960
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.ipv4.udp_mem = 95106 126814 190212
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192

kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.msgmni = 4096
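A small sketch to confirm the values actually took effect after a reboot (assumes a Linux /proc filesystem; the keys are the ones listed above):

```python
from pathlib import Path

def read_sysctl(key: str) -> str:
    """Read a sysctl value directly from /proc/sys (dots become slashes)."""
    return Path("/proc/sys", key.replace(".", "/")).read_text().strip()

# Spot-check a couple of the tweaked parameters.
for key in ("net.core.netdev_max_backlog", "net.core.netdev_budget"):
    print(key, "=", read_sysctl(key))
```

Equivalent to `sysctl net.core.netdev_max_backlog`, but handy when scripting checks across several boxes.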

Let’s see how the box does now.