VyOS Network Emulator/Limiter on VMware ESXi

I am trying to limit the bandwidth between two networks, “management” and “production”, which are just two separate /24s. I want to be able to limit the bandwidth all the way down to 10 Mbit/s.

I am on VMware. I have two Linux hosts, one in each network, attached to each network’s vSwitch. In the middle I have a VyOS 1.14 router with two VMXNET3 interfaces: eth0 (management) and eth1 (production).

I have set the speed of both interfaces to 100 with:

set interfaces ethernet eth0 speed 100
set interfaces ethernet eth1 speed 100
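(As a sanity check, what the driver actually reports can be read with ethtool from the VyOS shell; as far as I can tell the paravirtual VMXNET3 NIC may simply ignore a forced speed and keep reporting its native rate:)

sudo ethtool eth0 | grep -i speed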

I have also tried defining two different types of outbound policy, a network-emulator and a shaper, as well as an inbound limiter policy:

set traffic-policy network-emulator WAN-EMU
set traffic-policy network-emulator WAN-EMU bandwidth 100mbit
set traffic-policy shaper SHAPER
set traffic-policy shaper SHAPER bandwidth 100mbit
set traffic-policy shaper SHAPER default bandwidth 100mbit
set traffic-policy shaper SHAPER default ceiling 100%
set traffic-policy shaper SHAPER default queue-type fair-queue
set traffic-policy limiter LIMITER
set traffic-policy limiter LIMITER default bandwidth 100mbit
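(These are all set to 100mbit while I test; once that works, the same definitions with a lower rate should cover the 10 Mbit/s target, e.g.:)

set traffic-policy shaper SHAPER bandwidth 10mbit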

I have also applied these policies accordingly:

set interfaces ethernet eth0 traffic-policy out WAN-EMU
set interfaces ethernet eth0 traffic-policy out SHAPER
set interfaces ethernet eth0 traffic-policy in LIMITER

set interfaces ethernet eth1 traffic-policy out WAN-EMU
set interfaces ethernet eth1 traffic-policy out SHAPER
set interfaces ethernet eth1 traffic-policy in LIMITER

I know I can only apply one outbound policy at a time, but I tried both ways and neither worked.
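To see which qdisc actually ends up installed after a commit, the kernel can be queried directly from the VyOS shell (eth0 here is my management interface):

sudo tc qdisc show dev eth0
sudo tc -s class show dev eth0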

I set up an iperf server on the management Linux host, ran the iperf client on the production Linux host… and I get 460 MB/s across.
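(For reference, this is the kind of test I am running; the server address is a placeholder for my management host:)

iperf -s                  # on the management host
iperf -c 10.0.1.10 -t 10  # on the production host, 10-second run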

What am I doing wrong?

Note:
This is a nested ESXi environment. The vSwitch does not have promiscuous mode on, but the routing works just fine, so I don’t understand what the problem could be.

If you are using VMXNET3 adapters you can limit it directly at the hypervisor level.

But then I can’t use any of the VyOS features such as WAN emulation or shaping?

Yes you can.
The config below immediately cranks up ping times to eth0 over here from <1 ms to 50 ms:

set traffic-policy network-emulator MY network-delay 50
set interfaces ethernet eth0 traffic-policy out MY
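(To verify, ping the router’s eth0 address from a host on that network before and after committing; the address is just an example:)

ping -c 4 192.0.2.1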

This doesn’t really solve my problem. I’m wondering if there’s a prerequisite for VyOS QoS on VMware that I haven’t fulfilled…

Would the fact that this VMware environment is nested impact this capability? When I iperf across the two subnets, I still observe 400 MB/s transfer speed.

If you are using VMXNET3 adapters you can limit it directly at the hypervisor level.

@c-po this isn’t really what I’d like to do. I’m aware there are vSphere modifications that can be made, but not with the same level of control that could theoretically be provided by the network-emulator traffic policy class.

Is this just currently non-functional? No matter how I apply these policies, they don’t seem to affect the throughput whatsoever. iperf still shows 350–450 MB/s across.

Hi @ckuperst,

I am not aware of any QoS limitation at the moment with VyOS 1.2 and 1.3. I myself have a shaper that shapes outbound traffic to 98 Mbit/s:

set interfaces pppoe pppoe0 traffic-policy out 'mnet-out'
set traffic-policy shaper mnet-out bandwidth '98mbit'
set traffic-policy shaper mnet-out default bandwidth '100%'
set traffic-policy shaper mnet-out default burst '15k'
set traffic-policy shaper mnet-out default queue-limit '1000'
set traffic-policy shaper mnet-out default queue-type 'fq-codel'

Both netem and shaper work OK here; see below. Each config is followed by the resulting iperf sender line.

set traffic-policy network-emulator NETEM100 bandwidth 100000
set interfaces ethernet eth0 traffic-policy out NETEM100
[ 4] 0.00-10.00 sec 113 MBytes 94.4 Mbits/sec 0 sender

set traffic-policy shaper SHAPER
set traffic-policy shaper SHAPER bandwidth 100mbit
set traffic-policy shaper SHAPER default bandwidth 100mbit
set traffic-policy shaper SHAPER default ceiling 100%
set traffic-policy shaper SHAPER default queue-type fair-queue
set interfaces ethernet eth0 traffic-policy out SHAPER
[ 4] 0.00-10.00 sec 114 MBytes 95.6 Mbits/sec 0 sender

When applying a policy (run sudo tc monitor in a different window), I noticed tc had been using the hardware multi-queue qdisc (mq); maybe your hardware behaves differently:

vyos@vyos:~$ sudo tc monitor
deleted qdisc tbf 1: dev eth1 root rate 100Mbit burst 15Kb lat 50ms
deleted qdisc mq 0: dev eth1 root
qdisc htb 1: dev eth1 root refcnt 2 r2q 63 default 0x2 direct_packets_stat 0 direct_qlen 1000
class htb 1:1 dev eth1 root prio 0 rate 100Mbit ceil 100Mbit burst 1600b cburst 1600b
class htb 1:2 dev eth1 parent 1:1 prio 0 rate 100Mbit ceil 100Mbit burst 15337b cburst 1600b
qdisc sfq 800c: dev eth1 parent 1:2 limit 127p quantum 1514b depth 127 divisor 1024

@16again

This was after applying

set interfaces ethernet eth1 traffic-policy out DOWNLOAD-POLICY

and committing.

I have a bad feeling there’s something about how this environment (VMware vSphere) is configured that is affecting this behavior. Is there anything I need to look out for and verify in the vSwitch/VM configuration to ensure I am getting the correct NIC and driver behavior?

Problem solved! Enabled promiscuous mode on the vSphere switches and all worked as expected. Thanks!
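(For anyone else hitting this in a nested setup: assuming a standard vSwitch named vSwitch0, the equivalent from the ESXi shell should be something like the following, with the matching get command to confirm the resulting policy:)

esxcli network vswitch standard policy security set --allow-promiscuous=true --vswitch-name=vSwitch0
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0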
