Mapped Ports to Container

Hello again. I have a container running Pi-hole, but I cannot access the mapped ports. They show up as listening, but no traffic passes through. Are there NAT rules needed beyond the container setup? The container docs and the example they give show how to map ports, but don't mention that any additional NAT might be needed:

container {
    name pihole {
        cap-add net-admin
        cap-add net-raw
        cap-add net-bind-service
        description "pihole dns server"
        environment DNS1 {
            value 1.1.1.3
        }
        environment DNS2 {
            value 1.0.0.3
        }
        environment DNSMASQ_LISTENING {
            value all
        }
        environment TZ {
            value America/Denver
        }
        environment WEBPASSWORD {
            value xxxxxx
        }
        image pihole:latest
        memory 1024
        network NET {
        }
        port dns_tcp {
            destination 53
            protocol tcp
            source 53
        }
        port dns_udp {
            destination 53
            protocol udp
            source 53
        }
        port http {
            destination 80
            protocol tcp
            source 8080
        }
        restart on-failure
        volume dnsmasq.d {
            destination /etc/dnsmasq.d
            source /config/pihole/dnsmasq.d
        }
        volume log {
            destination /var/log/pihole
            source /config/pihole/log
        }
        volume pihole {
            destination /etc/pihole
            source /config/pihole/etc
        }
    }
    network NET {
        prefix 10.88.0.0/24
    }
}

If I change to use host networking, then it all works. So I think the rest of the config is good to go.
If I enter the container with podman exec, then I can curl localhost:80 and get the page I’m expecting.
Telnet to port 8080 from my LAN device works, so I don’t think it’s a firewall issue. LAN to LOCAL is wide open.
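(For reference, by "host networking" I mean dropping the network NET attachment in favour of the allow-host-networks option, roughly:

set container name pihole allow-host-networks
delete container name pihole network NET

With that option the container shares the host's network stack, so its services listen on the router's own addresses.)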

It seems you have to configure your own NAT rules, as podman does not do NAT itself.

Any chance you have an idea of what the NAT rule should look like? This is my podman network, as shown in ifconfig:

cni-NET: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.88.0.1  netmask 255.255.255.0  broadcast 10.88.0.255
        ether 82:d1:5e:9a:e4:fb  txqueuelen 1000  (Ethernet)
        RX packets 946  bytes 62424 (60.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 41  bytes 1722 (1.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

But when I tried to create a source NAT rule using "cni-NET", it says that interface does not exist on the system.
If I understand destination NAT (port forwarding) correctly, I don't think that's what I need. The container has an IP of 10.88.0.2, which I can't even see or connect to from my VyOS host.

Sorry for what might seem like such basic questions.

It could in fact be a destination rule, with the firewall being the reason 10.88.0.2 is invisible to me. But then, how do I assign cni-NET to a zone if it's not recognized…?

I recommend assigning the container a permanent address, so that rules pointing at it survive container restarts. For example:

set container name pihole network NET address 10.88.0.2

At minimum, you need source NAT to give the internal network 10.88.0.0/24 Internet access:

set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '10.88.0.0/24'
set nat source rule 100 translation address 'masquerade'

where eth0 is the WAN interface.
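Once committed, you can confirm the rule is installed and matching traffic with the NAT operational commands, assuming your VyOS version has them in this form:

show nat source rules
show nat source translations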


Spent some more time on it.

Inside the container, I have Internet access, which matches my outbound NAT rule.

I was able to put cni-NET into the LAN zone, and I set a DNAT rule translating anything arriving on any interface at port 8080 to port 80 on the container IP 10.88.0.3. From a LAN client, I can now load the web server at router:8080.
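In case it's useful, what I configured was roughly this (rule number and zone name are mine):

set zone-policy zone LAN interface 'cni-NET'
set nat destination rule 110 inbound-interface 'any'
set nat destination rule 110 protocol 'tcp'
set nat destination rule 110 destination port '8080'
set nat destination rule 110 translation address '10.88.0.3'
set nat destination rule 110 translation port '80'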

But the router itself cannot access it. When I attempt curl http://localhost:8080, it just hangs. No denies in the logs; it just hangs. It's almost as if the DNAT is not applied when the traffic originates from the router, even though I put "any" as the inbound-interface in the DNAT rule.

Because the router is a member of the bridge for the container network, I can access the container directly at http://10.88.0.3:80, but I would like to know why the local system cannot reach it via DNAT.

Because DNAT matches on inbound-interface: those rules are applied to traffic entering the router, so connections originating on the router itself never hit them.
You can try playing with a virtual IP instead:

set interfaces dummy dum0 address '203.0.113.1/32'
set high-availability virtual-server 203.0.113.1 port '8080'
set high-availability virtual-server 203.0.113.1 real-server 10.88.0.3 port '80'

That didn't work either: it worked for LAN devices, but not from local.
There are multiple ways I can forward traffic from the VyOS host into the container. A DNAT rule works when listening on "all" interfaces, and the VIP worked when I connect to the VIP address as my destination.

But none of these strategies works from localhost. When I am SSH'd into VyOS and attempt to access any of the local IPs (127.0.0.1, 10.0.0.1, 203.0.113.1, etc.), the forwarded port 8081 appears closed.

sudo nmap localhost -p 8081
Starting Nmap 7.80 ( https://nmap.org ) at 2022-11-02 20:14 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00015s latency).
Other addresses for localhost (not scanned): ::1

PORT     STATE  SERVICE
8081/tcp closed blackice-icecap

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds

But if I go straight to the container IP on port 80 (10.88.0.3:80), it loads right away.
It appears that no port-forwarding strategy (DNAT or VIP) is engaged when the connection comes from the local machine, even with the inbound set to all interfaces.

Any idea whether DNAT is supposed to ignore locally initiated requests?

I found this. See the one answer. Do you think this is what I'm running into?
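If so, the gist is that DNAT rules live in netfilter's PREROUTING chain, which locally generated packets never traverse; they go through the OUTPUT chain instead. A raw-iptables workaround might look something like this (hypothetical, outside the VyOS CLI, and not persistent across reboots):

sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 8080 -j DNAT --to-destination 10.88.0.3:80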

Hi,
I'm not 100% sure what you want to achieve, but I use a similar setup: a VyOS router with an AdGuard container as my local DNS server/filter.

My container config looks like this:

set container name adguard allow-host-networks
set container name adguard cap-add 'net-bind-service'
set container name adguard image 'adguard/adguardhome'
set container name adguard port admin-80 destination '80'
set container name adguard port admin-80 protocol 'tcp'
set container name adguard port admin-80 source '80'
set container name adguard port admin-443 destination '443'
set container name adguard port admin-443 protocol 'tcp'
set container name adguard port admin-443 source '443'
set container name adguard port admin-3000 destination '3000'
set container name adguard port admin-3000 protocol 'tcp'
set container name adguard port admin-3000 source '3000'
set container name adguard port tcp-dns destination '53'
set container name adguard port tcp-dns protocol 'tcp'
set container name adguard port tcp-dns source '53'
set container name adguard port udp-dns destination '53'
set container name adguard port udp-dns protocol 'udp'
set container name adguard port udp-dns source '53'

I use an additional IPv4 address on the loopback:
set interfaces loopback lo address '10.222.222.222/32'

This has the advantage that I have the same lo0 IP on every router, so all clients can always use the same DNS server IP.
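A quick check from any client would be something like this (hypothetical test; any domain works):

dig @10.222.222.222 vyos.io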

Maybe that helps a bit
Cheers
Marcel