Commit-archive and container-registry and nat-destination in a vrf ... how to?

Hi community,
In the lab I wrote a complex configuration with many interfaces, VRFs, containers and so on.

The relevant part is:


set container registry 10.214.1.254:5000 authentication password 'HIDDEN'
set container registry 10.214.1.254:5000 authentication username 'HIDDEN'
set container registry 10.214.1.254:5000 insecure
set firewall ipv4 input filter default-action 'accept'
set interfaces ethernet eth0 address '10.214.1.248/24'
set interfaces ethernet eth0 vrf 'VRF_MGMT'

set nat destination rule 10 description 'forward ports 80+443 to container'
set nat destination rule 10 destination port '80,443'
set nat destination rule 10 inbound-interface name 'eth0'
set nat destination rule 10 protocol 'tcp'
set nat destination rule 10 translation address '172.34.56.2'

set service ntp server pool.ntp.org pool
set service ntp vrf 'VRF_MGMT'
set service ssh access-control allow user 'vyos'
set service ssh dynamic-protection
set service ssh port '22'
set service ssh vrf 'VRF_MGMT'
set system config-management commit-archive location 'http://10.214.1.254:4444/backupvyos'

set vrf name VRF_MGMT protocols static route 0.0.0.0/0 next-hop 10.214.1.1
set vrf name VRF_MGMT table '100'

If interface eth0 is defined outside of any VRF, commit-archive, the container registry and NAT destination all work well (obviously).
But as soon as I added the interface to a VRF, they broke!

I can't find a vrf option for any of them, so my simple question is:

**How can I make them work? What am I missing?**

Any advice (or workaround) is welcome … except "remove the vrf" :slight_smile:
Thank you.

veth interfaces are useful for this:

set interfaces virtual-ethernet veth1 address '10.0.199.1/30'
set interfaces virtual-ethernet veth1 peer-name 'veth2'
set interfaces virtual-ethernet veth2 address '10.0.199.2/30'
set interfaces virtual-ethernet veth2 peer-name 'veth1'
set interfaces virtual-ethernet veth2 vrf 'VRF_MGMT'

Then just configure routing with static, ospf, bgp, etc…
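For example (the prefixes below are placeholders, not anyone's real networks), a pair of static routes over the veth link could look like this — one route in the default VRF pointing at the VRF_MGMT side, and one in VRF_MGMT pointing back:

set protocols static route 192.0.2.0/24 next-hop '10.0.199.2'
set vrf name VRF_MGMT protocols static route 198.51.100.0/24 next-hop '10.0.199.1'

Traffic between the VRFs then flows over the veth pair just like over any point-to-point link.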

NOTE: I should add that if you use the default container network type in VyOS, it will use a podman bridge, which will spawn a veth pair for every container to the host. Give your own veth interfaces high numbers like veth1001 and veth1002 to avoid name collisions.


Thanks for your fast reply.

This way the service is reachable at the veth address, not the original one, isn't it?

If eth0 had a public IP, would I have to waste one more public IP?

The veth pair will just be a point-to-point link (virtually) between the default and VRF_MGMT VRFs. This makes any service that only works in the default VRF believe its traffic is still in the correct VRF.

The only thing that would change is the originating address for the traffic. Right now, when you pull an image or do a commit-archive, you’re likely using 10.214.1.248. If you used the config I gave as an example, it would likely use 10.0.199.1.

commit-archive allows you to set a source address if you want to use a different one. add container image does not, so you'd need to NAT to use a different address.
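For the commit-archive part, recent VyOS versions expose a source-address option; with the veth addressing from the earlier example it might look like:

set system config-management commit-archive source-address '10.0.199.1'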

If eth0 has a public IP, you'd need to NAT, but you wouldn't need to waste an additional address.

I never used veth before, so please forgive my ignorance,
with

set interfaces virtual-ethernet veth1001 address '10.0.199.1/30'
set interfaces virtual-ethernet veth1001 peer-name 'veth1002'
set interfaces virtual-ethernet veth1002 address '10.0.199.2/30'
set interfaces virtual-ethernet veth1002 peer-name 'veth1001'
set interfaces virtual-ethernet veth1002 vrf 'VRF_MGMT'

I create a point-to-point link between the main VRF and VRF_MGMT, ok.

I have 3 cases to handle

  • nat destination … should I use 10.0.199.2 as the destination address? I then tried to DNAT it to the container IP but with no success

  • commit-archive … should I use 10.0.199.1 as the source-address? I then tried to SNAT it but with no success

  • add container / container registry … there is no source-address … what should I do?

I think I’m making my life difficult, right?

You’re not necessarily making your life difficult, but it comes down to how comfortable you are with VRFs and routing in general.

Your DNAT should be to whatever you need that remote traffic to reach. It looks like it’s to a container, so just forward it to the container IP. If the container is in the default VRF, then you’ll need a route from the mgmt VRF to the default VRF. If you’re using host-networking for the container, then it won’t really matter what you DNAT to, as long as the container is listening on that ip/port.

You can use the 10.0.199.1 IP for commit-archive. Just make sure you have an SNAT rule for it.

If you don’t care what IP originates any add container image pulls, then you can just leave that blank and it’ll try to use 10.0.199.1, which will be SNATed (if you configured SNAT that is).
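A minimal SNAT sketch for that (the rule number is arbitrary, the addresses are the veth example from earlier):

set nat source rule 110 source address '10.0.199.1'
set nat source rule 110 outbound-interface name 'eth0'
set nat source rule 110 translation address 'masquerade'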

One last note, make sure you have set vrf bind-to-all set.
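That is a single command:

set vrf bind-to-all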


Maybe, maybe, maybe … I did it …
but without set vrf bind-to-all,
simply with static routes and masquerade.
I don't know if it's the correct way, but it seems to work.

what I did …
the ptp vethXXXX

set interfaces virtual-ethernet veth1001 address '10.0.199.1/30'
set interfaces virtual-ethernet veth1001 peer-name 'veth1002'
set interfaces virtual-ethernet veth1002 address '10.0.199.2/30'
set interfaces virtual-ethernet veth1002 peer-name 'veth1001'
set interfaces virtual-ethernet veth1002 vrf 'VRF_MGMT'

the static routes: from the main VRF to the registry's external address, and from VRF_MGMT to the container address,

set protocols static route 10.214.1.254/32 next-hop 10.0.199.2
set vrf name VRF_MGMT protocols static route 172.34.56.2/32 next-hop 10.0.199.1

the SNAT Masquerade from eth0 to the registry external address
the SNAT Masquerade from veth1002 to the container address

set nat source rule 100 destination address '10.214.1.254'
set nat source rule 100 outbound-interface name 'eth0'
set nat source rule 100 translation address 'masquerade'
set nat source rule 1002 destination address '172.34.56.2'
set nat source rule 1002 outbound-interface name 'veth1002'
set nat source rule 1002 translation address 'masquerade'

The rest is exactly as in the previous configuration, the one without the VRF, that is:

set firewall ipv4 input filter default-action 'accept'
set nat destination rule 10 description 'forward ports 80+443 to container'
set nat destination rule 10 destination port '80,443'
set nat destination rule 10 inbound-interface name 'eth0'
set nat destination rule 10 protocol 'tcp'
set nat destination rule 10 translation address '172.34.56.2'

Now:
after commit, no more errors (commit-archive works),
add container works,
and accessing ports 80/443 of 10.214.1.248 I reach the container service.

:slight_smile: :slight_smile: :slight_smile: :slight_smile: :slight_smile:
I would say that this is already a step forward … but if there is a more correct way, please tell me

Meanwhile, many thanks @L0crian for the explanation and the tips

That looks mostly good. Your SNAT rule 1002 isn’t needed since you fully control the routing domain. Rule 100 is all you need for SNAT. You can just configure routing for the return traffic.

Yes, but by setting full routing (as a default gateway) for the return traffic, the containers could surf around the whole internet, and I don't want that.

But … maybe I found it: setting a TRUSTED group (network-group or address-group) as the destination in nat source seems to work.

Everything seems to work well now: the containers are only routed to the TRUSTED addresses, and only reachable from them.

This is now the relevant part of configuration

...
set interfaces ethernet eth0 address '10.214.1.248/24'
set interfaces ethernet eth0 vrf 'VRF_MGMT'
set interfaces virtual-ethernet veth1001 address '10.0.199.1/30'
set interfaces virtual-ethernet veth1001 peer-name 'veth1002'
set interfaces virtual-ethernet veth1002 address '10.0.199.2/30'
set interfaces virtual-ethernet veth1002 peer-name 'veth1001'
set interfaces virtual-ethernet veth1002 vrf 'VRF_MGMT'
...
set container registry 10.214.1.254:5000 authentication password 'HIDDEN'
set container registry 10.214.1.254:5000 authentication username 'HIDDEN'
set container registry 10.214.1.254:5000 insecure
...
set firewall group network-group TRUSTED network '10.214.1.254/32'
set firewall group network-group TRUSTED network 'ww.xx.yy.zz/24'
set firewall group network-group TRUSTED network 'aa.bb.cc.dd/26'
set firewall ipv4 input filter default-action 'accept'
...
set nat destination rule 10 description 'forward ports 80+443 to container'
set nat destination rule 10 destination port '80,443'
set nat destination rule 10 inbound-interface name 'eth0'
set nat destination rule 10 protocol 'tcp'
set nat destination rule 10 translation address '172.34.56.2'
set nat source rule 100 destination group network-group 'TRUSTED'
set nat source rule 100 outbound-interface name 'eth0'
set nat source rule 100 source address '10.0.199.1'
set nat source rule 100 translation address 'masquerade'
...
set service ntp server pool.ntp.org pool
set service ntp vrf 'VRF_MGMT'
set service ssh access-control allow user 'vyos'
set service ssh dynamic-protection
set service ssh port '22'
set service ssh vrf 'VRF_MGMT'
set system config-management commit-archive location 'http://10.214.1.254:4444/backupvyos'
...
set protocols static route 0.0.0.0/0 next-hop 10.0.199.2
set vrf name VRF_MGMT protocols static route 0.0.0.0/0 next-hop 10.214.1.1
set vrf name VRF_MGMT protocols static route 172.34.56.2/32 next-hop 10.0.199.1
set vrf name VRF_MGMT table '100'
...

I think this could be the final configuration … or not?

You’re not really blocking the containers from accessing the internet, you’re just blocking the internet from knowing how to access them. This means the containers can still fully send UDP packets to the internet, and you’re also potentially exposing your internal IP scheme to the internet by allowing the traffic.

If you don’t want the containers to reach the internet, you’ll want to block that traffic with firewall rules, not by preventing NAT.
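As a sketch (the rule numbers and the container subnet are examples, and TRUSTED is the group from the configuration above), something like this in the forward chain would allow container traffic only to TRUSTED destinations and drop the rest:

set firewall ipv4 forward filter rule 20 action 'accept'
set firewall ipv4 forward filter rule 20 source address '172.34.56.0/24'
set firewall ipv4 forward filter rule 20 destination group network-group 'TRUSTED'
set firewall ipv4 forward filter rule 30 action 'drop'
set firewall ipv4 forward filter rule 30 source address '172.34.56.0/24'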
