I am trying to limit the bandwidth between two networks, “management” and “production”, which are just two separate /24s. I want to be able to limit the bandwidth all the way down to 10 Mbit/s.
I am on VMware with two Linux hosts, one in each network, attached to that network’s vSwitch. In the middle I have a VyOS 1.1.4 router with two VMXNET3 interfaces: eth0 (management) and eth1 (production).
I have set the speed of both interfaces to 100 with:
set interfaces ethernet eth0 speed 100
set interfaces ethernet eth1 speed 100
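(For context: VMXNET3 is a paravirtual adapter with no real PHY, so a configured link speed is often not enforced by the driver. A quick sanity check, assuming ethtool is reachable from the VyOS shell, would be something like:)

```shell
# See what the driver actually reports; on VMXNET3 the link speed is
# emulated (commonly shown as 10000Mb/s) regardless of the configured value.
ethtool eth0 | grep -E 'Speed|Duplex'
```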
I have also tried defining two different outbound policies (a network-emulator and a shaper), as well as an inbound limiter policy:
set traffic-policy network-emulator WAN-EMU
set traffic-policy network-emulator WAN-EMU bandwidth 100mbit
set traffic-policy shaper SHAPER
set traffic-policy shaper SHAPER bandwidth 100mbit
set traffic-policy shaper SHAPER default bandwidth 100mbit
set traffic-policy shaper SHAPER default ceiling 100%
set traffic-policy shaper SHAPER default queue-type fair-queue
set traffic-policy limiter LIMITER
set traffic-policy limiter LIMITER default bandwidth 100mbit
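(For reference, here is a minimal sketch of a shaper matching the stated 10 Mbit/s goal, assuming VyOS 1.1-style syntax; SHAPER-10M is a hypothetical name, and the commit/save steps are included because the policy does nothing until committed:)

```shell
configure
# Hypothetical 10 Mbit/s shaper; default class gets the full rate.
set traffic-policy shaper SHAPER-10M bandwidth 10mbit
set traffic-policy shaper SHAPER-10M default bandwidth 100%
set traffic-policy shaper SHAPER-10M default ceiling 100%
set traffic-policy shaper SHAPER-10M default queue-type fair-queue
commit
save
```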
I have also applied these accordingly:
set interfaces ethernet eth0 traffic-policy out WAN-EMU
set interfaces ethernet eth0 traffic-policy out SHAPER
set interfaces ethernet eth0 traffic-policy in LIMITER
set interfaces ethernet eth1 traffic-policy out WAN-EMU
set interfaces ethernet eth1 traffic-policy out SHAPER
set interfaces ethernet eth1 traffic-policy in LIMITER
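(One way to verify the policies actually made it into the kernel after commit, a diagnostic sketch run from the VyOS shell: if only the default mq/pfifo_fast qdisc shows up, the policy never applied.)

```shell
# Show the installed qdiscs and per-class counters on the interface;
# counters should increase while iperf traffic is flowing.
tc qdisc show dev eth0
tc -s class show dev eth0
```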
I know only one outbound policy can be applied at a time, but I tried each of them separately and neither worked.
I set up an iperf server on the management Linux host and ran the iperf client on the production host… and I get 460 MB/s across.
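(Worth double-checking the units here: iperf reports Mbits/sec by default, and MB/s vs Mbit/s is an 8x difference when comparing against a 100mbit policy. A sketch of the test, where the server address is a placeholder:)

```shell
# On the management host:
iperf -s

# On the production host (-f m forces Mbits/sec for an unambiguous reading):
iperf -c <management-host-ip> -t 30 -f m
```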
What am I doing wrong?
Note:
This is a nested ESXi environment. The vSwitch does not have promiscuous mode enabled, but routing works just fine, so I don’t understand what the problem could be.
This doesn’t really solve my problem. I’m wondering if there’s a prerequisite for VyOS QoS on VMware that I haven’t fulfilled…
Would the fact that this VMware environment is nested impact the capability? When I iperf across the two subnets, I still observe a 400 MB/s transfer speed.
If you are using VMXNET3 adapters you can limit it directly at the hypervisor level.
@c-po this isn’t really what I’d like to do. I’m aware there are vSphere modifications that can be made, but not with the same level of control that could theoretically be provided by the network-emulator traffic policy class.
Is this just currently non-functional? No matter how I apply these policies, the throughput is unaffected: iperf still shows 350-450 MB/s across.
I have a bad feeling that something about how this VMware vSphere environment is configured is affecting this behavior. Is there anything I need to verify in the vSwitch/VM configuration to ensure I am getting the correct NIC and driver behavior?
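(Two things I would check from the VyOS shell, as a diagnostic sketch: which driver is actually bound to the interface, and whether segmentation/receive offloads are active, since large offloaded frames can skew tc shaping and policing results on virtual NICs.)

```shell
# Confirm the interface is really using the vmxnet3 driver:
ethtool -i eth0 | grep driver

# Check offload state; TSO/GRO/LRO produce oversized frames in software:
ethtool -k eth0 | grep -E 'segmentation|receive-offload'
```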