GRE-bridge tunneling for multicast between two VyOS routers

Hi there, I have been trying to set up a GRE tunnel between two VyOS routers to carry some multicast streams between them. Although I followed all the wiki guides, I can't get the tunnel set up correctly. Everything runs virtually except the multicast source: a Linux client VM is connected via an internal network to R1, R1 connects to R2 over another internal network, and R2 is bridged directly to the network with the multicast stream source.
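For reference, the topology looks roughly like this (a sketch drawn from the configs below; interface names match them):

client VM (Linux) --(int-net)-- eth1 [R1] eth2 --(int-net)-- eth2 [R2] eth1 -- multicast source

On both routers, eth1 and the gre-bridge tunnel tun1 are members of br1, so the two LAN segments are meant to behave as a single L2 domain.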
R1 config:

bridge br1 {
    address 3.3.3.1/30
    address 10.128.0.221/16
    aging 300
    description "BRIDGE TO OUT"
    hello-time 2
    max-age 20
    priority 0
    stp false
}
ethernet eth1 {
    bridge-group {
        bridge br1
    }
    description "INT TO SEBA94"
    duplex auto
    hw-id 08:00:27:78:2a:5b
    smp_affinity auto
    speed auto
}
ethernet eth2 {
    address 10.0.2.1/24
    description R2R
    duplex auto
    hw-id 08:00:27:de:41:93
    smp_affinity auto
    speed auto
}
ethernet eth3 {
    address 11.11.11.2/24
    description PUTTY-CONNECTION
    duplex auto
    hw-id 08:00:27:91:d0:2f
    smp_affinity auto
    speed auto
}
loopback lo {
}
tunnel tun1 {
    encapsulation gre-bridge
    local-ip 10.0.2.1
    multicast enable
    parameters {
        ip {
            bridge-group {
                bridge br1
            }
        }
    }
    remote-ip 10.0.2.2
}
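For readers following along, the tunnel stanza above should correspond roughly to these VyOS 1.1-style set commands (a reconstruction from the config output, not copied from the router):

set interfaces tunnel tun1 encapsulation gre-bridge
set interfaces tunnel tun1 local-ip 10.0.2.1
set interfaces tunnel tun1 remote-ip 10.0.2.2
set interfaces tunnel tun1 multicast enable
set interfaces tunnel tun1 parameters ip bridge-group bridge br1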


R2 config:

bridge br1 {
    address 3.3.3.2/30
    address 10.128.0.220/16
    aging 300
    description "BRIDGE TUN1-ETH1"
    hello-time 2
    max-age 20
    priority 0
    stp false
}
ethernet eth1 {
    bridge-group {
        bridge br1
    }
    description "INT TO MCAST"
    duplex auto
    hw-id 08:00:27:1c:97:76
    smp_affinity auto
    speed auto
}
ethernet eth2 {
    address 10.0.2.2/24
    description R2R
    duplex auto
    hw-id 08:00:27:41:1b:a1
    smp_affinity auto
    speed auto
}
ethernet eth3 {
    address 12.12.12.2/24
    description PUTTY-CONNECTION
    duplex auto
    hw-id 08:00:27:a3:15:e1
    smp_affinity auto
    speed auto
}
loopback lo {
}
tunnel tun1 {
    encapsulation gre-bridge
    local-ip 10.0.2.2
    multicast enable
    parameters {
        ip {
            bridge-group {
                bridge br1
            }
        }
    }
    remote-ip 10.0.2.1
}

Ping between the bridges works, but multicast does not. Also, pinging from my VM to the R1 bridge works, but not to the R2 bridge.
I'm stuck here; any help is appreciated.
Thanks in advance

Hello @Seba, can you provide the output of sudo ip route get 239.0.0.1 from both R1 and R2? I think the problem is with routing.

Sure, here it goes:

R1: multicast 239.0.0.1 via 10.0.2.2 dev eth2 src 10.0.2.1
    cache

R2: multicast 239.0.0.1 via 10.0.2.1 dev eth2 src 10.0.2.2
    cache
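(Side note: both outputs show the kernel resolving the group via eth2, the transport link, rather than via br1. That only matters for multicast generated on the routers themselves; a route like the one below would steer such traffic onto the bridge instead, assuming the usual 224.0.0.0/4 group range. Traffic bridged through tun1 at layer 2 ignores the routing table either way.)

sudo ip route add 224.0.0.0/4 dev br1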

And please also provide the output of sudo ip link show dev tun1.
Did you configure an IGMP proxy?

Sure, here it goes:
R1:
8: tun1@NONE: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1462 qdisc pfifo_fast master br1 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether b6:61:e2:8a:6c:8c brd ff:ff:ff:ff:ff:ff

R2:
8: tun1@NONE: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1462 qdisc pfifo_fast master br1 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 22:54:ab:fa:2a:02 brd ff:ff:ff:ff:ff:ff

I don't have an IGMP proxy configured on either router.

OK, I think it should work. I created a lab and tested exactly that.


You should capture traffic to investigate this problem in detail.
Start on R2 with sudo tcpdump -n -i tun1, then do the same on R1. You can also use a tcpdump filter such as host 239.0.0.1 with your multicast address.
PS: on my test server and client I added this route:
ip route add 224.0.0.0/4 dev e0
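Put together, the captures would look something like this (assuming the stream uses group 239.0.0.1; substitute your own group address):

sudo tcpdump -n -i tun1 host 239.0.0.1    # run on R2 first, then the same on R1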

I can't get it to work. What I don't understand is why I can ping from bridge to bridge between the routers, and from the client to the bridge on R1, but not from the client to the bridge on R2. If I can't do that, I will never get multicast through the tunnel, because it means the client can't reach the network behind R2. Am I correct?

Not quite, I think. Can you draw a network map on draw.io with all the IP addresses (e.g. client IP, server IP, multicast group)? Do you have any additional firewall rules? Did you see multicast on R2's br1 and tun1?

Here it goes:

As I said, pinging between the bridges works, and between the client and the R1 bridge as well. Ping from the client to the R2 bridge doesn't work. If I play some multicast with VLC on my host machine and listen with tcpdump -i br1 on R2, I can see the mcast traffic.

Here are both VyOS configs:
R1:
bridge br1 {
    address 10.128.0.248/16
    aging 300
    description "BRIDGE TO OUT"
    hello-time 2
    max-age 20
    priority 0
    stp false
}
ethernet eth1 {
    bridge-group {
        bridge br1
    }
    description "INT TO SEBA94"
    duplex auto
    hw-id 08:00:27:78:2a:5b
    smp_affinity auto
    speed auto
}
ethernet eth2 {
    address 192.168.200.88/24
    description R2R
    duplex auto
    hw-id 08:00:27:de:41:93
    smp_affinity auto
    speed auto
}
ethernet eth3 {
    address 11.11.11.2/24
    description PUTTY-CONNECTION
    duplex auto
    hw-id 08:00:27:91:d0:2f
    smp_affinity auto
    speed auto
}
loopback lo {
}
tunnel tun1 {
    encapsulation gre-bridge
    local-ip 192.168.200.88
    multicast enable
    parameters {
        ip {
            bridge-group {
                bridge br1
            }
        }
    }
    remote-ip 192.168.200.89
}

R2:
bridge br1 {
    address 10.128.0.220/16
    aging 300
    description "BRIDGE TUN1-ETH1"
    hello-time 2
    max-age 20
    priority 0
    stp false
}
ethernet eth1 {
    bridge-group {
        bridge br1
    }
    description "INT TO MCAST"
    duplex auto
    hw-id 08:00:27:1c:97:76
    smp_affinity auto
    speed auto
}
ethernet eth2 {
    address 192.168.200.89/24
    description R2R
    duplex auto
    hw-id 08:00:27:41:1b:a1
    smp_affinity auto
    speed auto
}
ethernet eth3 {
    address 12.12.12.2/24
    description PUTTY-CONNECTION
    duplex auto
    hw-id 08:00:27:a3:15:e1
    smp_affinity auto
    speed auto
}
loopback lo {
}
tunnel tun1 {
    encapsulation gre-bridge
    local-ip 192.168.200.89
    multicast enable
    parameters {
        ip {
            bridge-group {
                bridge br1
            }
        }
    }
    remote-ip 192.168.200.88
}

And the route tables:
R1:
10.128.0.0/16 dev br1 proto kernel scope link src 10.128.0.248
11.11.11.0/24 dev eth3 proto kernel scope link src 11.11.11.2
127.0.0.0/8 dev lo proto kernel scope link src 127.0.0.1
192.168.200.0/24 dev eth2 proto kernel scope link src 192.168.200.88

R2:
10.128.0.0/16 dev br1 proto kernel scope link src 10.128.0.220
12.12.12.0/24 dev eth3 proto kernel scope link src 12.12.12.2
127.0.0.0/8 dev lo proto kernel scope link src 127.0.0.1
192.168.200.0/24 dev eth2 proto kernel scope link src 192.168.200.89

OK, let's first try to solve the problem with ICMP between the client and the R2 bridge IP address. Enable promiscuous mode in VirtualBox, then run ping 10.128.0.220 from the client and capture traffic with tcpdump on R2's tun1.
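If you prefer VirtualBox's CLI over the GUI, promiscuous mode is set per NIC along these lines (the VM name "R2" and NIC number 2 are guesses; adjust to your setup, and power the VM off first when using modifyvm):

VBoxManage modifyvm "R2" --nicpromisc2 allow-all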

No response from R2, even with promiscuous mode on

OK, keep pinging from the client and capture on tun1 on R1. Do you see packets?

No packets captured on tun1 on R1 when pinging from the client.

Do you see ICMP packets on R1's eth1 and br1?

I rebooted everything and now I can ping from the client to br1 on R2 and get a response! Even to different IPs in the 10.128.0.0 network. Now do I have to configure the routers in any other way to pass multicast?

I think you only need to add a multicast route on the client. But if that doesn't work, you also need to add an IP address on the server, in the 10.128.0.0/16 network.
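Concretely, that would be something like the following (enp0s9 is the client interface mentioned later in the thread; the server-side address and interface are placeholders):

# on the client:
sudo ip route add 224.0.0.0/4 dev enp0s9
# on the server, if the route alone is not enough:
sudo ip addr add 10.128.0.250/16 dev eth0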

I couldn't get the multicast to work. I added the 224.0.0.0/4 route to the client's enp0s9 interface and set up igmp-proxy on both routers, and I still can't get it.

R1:
protocols {
    igmp-proxy {
        interface eth1 {
            role downstream
        }
        interface eth2 {
            role upstream
        }
    }
}

R2:
protocols {
    igmp-proxy {
        interface eth1 {
            alt-subnet 172.30.153.0/24
            role upstream
        }
        interface eth2 {
            role downstream
        }
    }
}

I don't know if that should be enough, or if I need to add a multicast route on the routers as well.
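For reference, the R1 stanza above should be equivalent to these set commands (same pattern on R2, with the roles swapped and the alt-subnet added):

set protocols igmp-proxy interface eth1 role downstream
set protocols igmp-proxy interface eth2 role upstream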

Delete igmp-proxy on both R1 and R2, then show a few multicast packets captured with tcpdump on R2's br1. Also check for multicast on R2's tun1.
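A couple of captures along these lines would cover both checks; the "ip multicast" filter and -c (packet count) are standard tcpdump options, and the -n output also shows the source address the next question asks about:

sudo tcpdump -n -c 5 -i br1 ip multicast
sudo tcpdump -n -c 5 -i tun1 ip multicast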

I can only capture multicast on br1 of R2 if I request it with VLC from the host machine. Nothing if I do it from the client, and no traffic on tun1.

Can you show one of your multicast packets? Which source IP address does the server send to the multicast group from?