WireGuard config and status R2
show interfaces wireguard
vyos@R1:~$ show interfaces wireguard
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface IP Address S/L Description
wg0 10.3.3.3/24 u/u VPN-GW-GRM
show interfaces wireguard wg0
vyos@R1:~$ show interfaces wireguard wg0
wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.3.3.3/24 brd 10.3.3.255 scope global wg0
valid_lft forever preferred_lft forever
inet6 fe80::f50d:24ff:fe84:3e68/64 scope link
valid_lft forever preferred_lft forever
Description: VPN-GW-GRM
WireGuard config and status R1
show interfaces wireguard
vyos@GW-MAIN:~$ show interfaces wireguard
Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down
Interface IP Address S/L Description
wg0 10.3.3.1/24 u/u
show interfaces wireguard wg0
vyos@GW-MAIN:~$ show interfaces wireguard wg0
wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.3.3.1/24 brd 10.3.3.255 scope global wg0
valid_lft forever preferred_lft forever
inet6 fe80::f85c:68ff:fedc:27f5/64 scope link
valid_lft forever preferred_lft forever
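Beyond the interface status above, it can help to check WireGuard's own peer state; on VyOS this is available through the underlying `wg` tool (a sketch, assuming shell access on the router):

```shell
# Show peer public keys, endpoints, allowed IPs, transfer counters,
# and the time of the latest handshake on the wg0 interface.
# If "latest handshake" never appears, encrypted traffic is not
# reaching the peer at all, which points below WireGuard (e.g. L2/ARP).
sudo wg show wg0
```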
VMware? Or what hypervisor/solution are you using? Can you confirm those ARP packets are making it from R2 to the next upstream switch or vSwitch?
If it is VMware, just for giggles go into the vSwitch port group for the VM and turn on promiscuous mode. I had problems in the past trying to run GNS3/EVE-NG through a VM because it required promiscuous mode. I wasn't able to bridge the simulated devices onto the network proper. Sounds almost like the same thing here.
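For illustration only (the vSwitch name `vSwitch0` is an assumption, and this applies to a standard vSwitch on ESXi, not a distributed one), promiscuous mode can be toggled from the ESXi shell roughly like this:

```shell
# Allow promiscuous mode on the standard vSwitch (name is an assumption).
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the security policy actually changed.
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```

The same setting is also reachable per port group in the vSphere UI, which is the safer place to change it if only one VM needs it.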
I use oVirt/KVM. ARP packets work fine between R1 and R2.
However, tcpdump on the virtualization host's network interface that carries the traffic between R2 and the VM does not see those same ARP messages. So I also think the problem is at the virtualization level.
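One way to narrow down where the ARP frames disappear is to capture on each hop inside the host. The interface names below (`vnet0`, `br0`, `eth0`) are assumptions for a typical oVirt/KVM host; the real names can be found with `ip link`:

```shell
# On the VM's tap device, closest to the guest:
sudo tcpdump -nei vnet0 arp

# On the Linux bridge the tap is attached to:
sudo tcpdump -nei br0 arp

# On the physical uplink leaving the host:
sudo tcpdump -nei eth0 arp
```

If the ARP requests show up on `vnet0` but never reach the bridge or the uplink, the frames are being dropped inside the host. On oVirt, one common culprit for exactly this symptom is the `vdsm-no-mac-spoofing` network filter applied by default to vNIC profiles, which drops frames whose source MAC/IP the VM does not own; a profile without that filter may be worth testing.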