Office to AWS hub/spoke


#1

Hi All,

I have successfully set up a VPN network in AWS using VyOS in a hub/spoke topology, but I’m having trouble routing all the way through it from office VPN connections. Here’s a little explanation of how it works before I describe the problem.

I am using two regions (us-east-1 and us-west-2) with plans to eventually expand to a third. Each region has a “Core” VPC. Each Core VPC has two VyOS instances (a and b). Each VyOS instance has an IPsec VPN tunnel to both the a and b instances in the other region. This makes for four total connections between the two regions. As more regions are added, each “Core” VyOS instance will have tunnels added to each a and b instance in the new regions. This may become full mesh, or it may only be partial mesh. I don’t have to decide yet with only two regions.
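For planning the full-mesh case, the tunnel count grows quadratically with the number of regions. A quick back-of-the-envelope sketch (my own illustration, not part of any config; it just counts the 2 × 2 a/b pairings per region pair):

```python
from math import comb

def inter_region_tunnels(regions: int) -> int:
    """Full mesh of Core pairs: each region pair contributes
    2 (a/b local) x 2 (a/b remote) = 4 IPsec tunnels."""
    return 4 * comb(regions, 2)

print(inter_region_tunnels(2))  # 4 tunnels with two regions
print(inter_region_tunnels(3))  # 12 with a third region
```

So a third full-mesh region triples the inter-region tunnel count, which is one reason partial mesh may be worth considering later.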

In each region, there are multiple VPCs (think qa, staging, production, etc.). Each VPC also has a pair of VyOS instances (a and b), which connect to the Core VyOS instances in the same way the Core instances are connected (four total connections per VPC VPN connection). This is what makes the hub/spoke part of the topology. Each VPC within a region can reach any other VPC within the region by going through the Core VPC. And likewise, any VPC can reach any VPC in another region by going through the Core VPC in its own region, then the Core VPC of the other region, then to the destination VPC. Routes are all advertised via BGP.

Every VyOS instance is configured to prefer routes learned from the “A” instances. If all instances are up and functioning, all traffic will go through these primary instances. Every instance runs a monitoring script that detects if its partner instance has gone down. If it has, the healthy instance updates the appropriate AWS routing table with itself in place of the broken instance (assuming the broken one was the primary at the time). When the failed instance comes back, if it is the primary, it resets the AWS routing table back to itself. BGP already takes care of fixing the routes if a primary link fails.
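The monitoring script itself isn't shown, so here is a minimal sketch of the takeover decision logic I'd expect it to implement. All names are my own assumptions, not the actual script; the returned dict is shaped as kwargs for an EC2 ReplaceRoute call (e.g. boto3 `ec2.replace_route`), with placeholder IDs:

```python
from typing import Optional

def takeover_action(partner_alive: bool, partner_was_primary: bool,
                    my_eni: str) -> Optional[dict]:
    """Decide whether this instance should rewrite the AWS route table.

    Returns kwargs for an EC2 ReplaceRoute call, or None if no action
    is needed. Hypothetical sketch -- the real script also handles
    fail-back when the primary returns.
    """
    if partner_alive or not partner_was_primary:
        return None  # partner is healthy, or it wasn't holding the route
    return {
        "RouteTableId": "rtb-EXAMPLE",        # placeholder route table
        "DestinationCidrBlock": "0.0.0.0/0",  # placeholder destination
        "NetworkInterfaceId": my_eni,         # point the route at ourselves
    }
```

In the real script this decision would run on a health-check timer, and the fail-back path would issue the symmetric ReplaceRoute pointing back at the recovered primary.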

All of this works very well, but now I want to have connections to the office. The idea was to have AWS VPN connections to each “Core” VPC — not terminated on the VyOS instances, but actual AWS-managed VPN connections. Once those are set up, I want to be able to route traffic to and from the office, through the Core VPC VyOS instances, and on to the destination spoke VPCs. I am not in control of the office firewall, so all I can do is ask our network engineer to do basic things, some of which he may or may not want to do. I have a VPN connection set up as a test to one of the Core VPCs. The VyOS instances in that VPC can communicate with the office, but since AWS Virtual Private Gateways don’t actually have IPs, I have nothing to put in the config on the VyOS side as a next hop to advertise via BGP to the spoke VPC VyOS instances.

Has anyone ever figured out a way to route through an office/AWS VPN connection, through a hub/core VyOS instance, to a spoke VyOS instance? Or am I the only one who has ever actually wanted to try something like this? Obviously I could just create VPN connections to every VPC, but we have a lot of them, and it would get expensive.

Here is an example of one of the “A” Core VyOS instances’ config:

interfaces {
    ethernet eth0 {
        address dhcp
        duplex auto
        hw-id **************
        smp_affinity auto
        speed auto
    }
    loopback lo {
    }
    vti vti0 {
        address 169.254.20.2/30
        description "Oregon to Virginia Tunnel 1"
        mtu 1436
    }
    vti vti1 {
        address 169.254.20.10/30
        description "Oregon to Virginia Tunnel 2"
        mtu 1436
    }
    vti vti10 {
        address 169.254.40.1/30
        description "Hub to OR-Dev Tunnel 1"
        mtu 1436
    }
    vti vti11 {
        address 169.254.40.9/30
        description "Hub to OR-Dev Tunnel 2"
        mtu 1436
    }
}
protocols {
    bgp 65201 {
        neighbor 169.254.20.1 {
            description "Oregon to Virginia Tunnel 1"
            remote-as 65101
            soft-reconfiguration {
                inbound
            }
            timers {
                holdtime 30
                keepalive 30
            }
        }
        neighbor 169.254.20.9 {
            description "Oregon to Virginia Tunnel 2"
            remote-as 65101
            soft-reconfiguration {
                inbound
            }
            timers {
                holdtime 30
                keepalive 30
            }
        }
        neighbor 169.254.40.2 {
            description "Hub to OR-Dev Tunnel 1"
            remote-as 65202
            soft-reconfiguration {
                inbound
            }
            timers {
                holdtime 30
                keepalive 30
            }
        }
        neighbor 169.254.40.10 {
            description "Hub to OR-Dev Tunnel 2"
            remote-as 65202
            soft-reconfiguration {
                inbound
            }
            timers {
                holdtime 30
                keepalive 30
            }
        }
        network 10.10.0.0/17 {
        }
        network 10.186.0.0/16 {
        }
        parameters {
            bestpath {
                compare-routerid
            }
        }
    }
    static {
        route 10.186.0.0/16 {
            next-hop 10.186.0.1 {
                distance 10
            }
        }
    }
}
service {
    snmp {
        community ****** {
            authorization ro
        }
    }
    ssh {
        disable-password-authentication
        port 22
    }
}
system {
    config-management {
        commit-revisions 20
    }
    console {
        device ttyS0 {
            speed 9600
        }
    }
    host-name *****************
    login {
        user vyos {
            authentication {
                encrypted-password ****************
                plaintext-password ****************
                public-keys ********* {
                    key ****************
                    type ssh-rsa
                }
            }
            level admin
        }
    }
    ntp {
        server 0.pool.ntp.org {
        }
        server 1.pool.ntp.org {
        }
        server 2.pool.ntp.org {
        }
    }
    package {
        auto-sync 1
        repository community {
            components main
            distribution helium
            password ****************
            url http://packages.vyos.net/vyos
            username ""
        }
    }
    syslog {
        global {
            facility all {
                level notice
            }
            facility protocols {
                level debug
            }
        }
    }
    time-zone UTC
}
vpn {
    ipsec {
        esp-group AWS {
            compression disable
            lifetime 3600
            mode tunnel
            pfs enable
            proposal 1 {
                encryption aes128
                hash sha1
            }
        }
        ike-group AWS {
            dead-peer-detection {
                action restart
                interval 15
                timeout 45
            }
            ikev2-reauth no
            key-exchange ikev1
            lifetime 28800
            proposal 1 {
                dh-group 2
                encryption aes128
                hash sha1
            }
        }
        ipsec-interfaces {
            interface eth0
        }
        nat-traversal disable
        site-to-site {
            peer <Spoke VyOS A IP> {
                authentication {
                    id <Local public IP>
                    mode pre-shared-secret
                    pre-shared-secret ****************
                    remote-id <Spoke VyOS A IP>
                }
                description "Hub to OR-Dev Tunnel 1"
                ike-group AWS
                local-address 10.186.0.129
                vti {
                    bind vti10
                    esp-group AWS
                }
            }
            peer <Spoke VyOS B IP> {
                authentication {
                    id <Local public IP>
                    mode pre-shared-secret
                    pre-shared-secret ****************
                    remote-id <Spoke VyOS B IP>
                }
                description "Hub to OR-Dev Tunnel 2"
                ike-group AWS
                local-address 10.186.0.129
                vti {
                    bind vti11
                    esp-group AWS
                }
            }
            peer <Virginia VyOS Hub A IP> {
                authentication {
                    id <Local public IP>
                    mode pre-shared-secret
                    pre-shared-secret ****************
                    remote-id <Virginia VyOS Hub A IP>
                }
                connection-type initiate
                description "Oregon to Virginia Tunnel 1"
                ike-group AWS
                ikev2-reauth inherit
                local-address 10.186.0.129
                vti {
                    bind vti0
                    esp-group AWS
                }
            }
            peer <Virginia VyOS Hub B IP> {
                authentication {
                    id <Local public IP>
                    mode pre-shared-secret
                    pre-shared-secret ****************
                    remote-id <Virginia VyOS Hub B IP>
                }
                connection-type initiate
                description "Oregon to Virginia Tunnel 2"
                ike-group AWS
                ikev2-reauth inherit
                local-address 10.186.0.129
                vti {
                    bind vti1
                    esp-group AWS
                }
            }
        }
    }
}

All of the other “Core” VyOS instance configs are similar, just with different IPs configured. The spoke VyOS instances (the OR-Dev VPC referred to in the config above) are similar, but of course with only two VPN tunnels each (one to each of the two “Core” instances in the region).

As you can see, there is an extra network being advertised over BGP (10.10.0.0/17). That is the office network. The piece that’s missing is a static route for that connection. What I can’t figure out is what to use for the next hop, since, like I said, an AWS Virtual Private Gateway has no IP address that you can just hard-code.
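One workaround I'd try (untested, and it assumes route propagation from the Virtual Private Gateway is enabled on the VyOS subnet's AWS route table): the VPC's implicit router at 10.186.0.1 forwards according to that route table, so the office prefix could be pointed at it, mirroring the existing 10.186.0.0/16 static route in the config above. Since 10.10.0.0/17 is already in the BGP network list, it would then be advertised to the spokes. Roughly:

```
set protocols static route 10.10.0.0/17 next-hop 10.186.0.1 distance 10
```

The idea is that traffic from a spoke lands on the Core VyOS via the VTI, the static route hands it to the subnet's implicit router, and the propagated VGW route carries it across the AWS VPN to the office. Return traffic would need the office side to route the spoke CIDRs into the tunnel.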

If anyone is confused by my description, I would be happy to clarify or even draw a diagram of the topology. Any help would be much appreciated. This is all still proof-of-concept and not in full use yet, so I can change configs however necessary. Thanks.