I am running 1.4-rolling-202307220317 in a VM on Proxmox, with dedicated 10G SR-IOV virtual interfaces presented to VyOS through Proxmox.
I have main connectivity working with dual stack, and I am testing other features like WireGuard and containerization. Given the Ansible automation, I am looking at standardizing my deployments on VyOS.
I have many instances deployed in various DC locations, all working fine.
The last instance waiting to move from OPNsense to VyOS is on my cable connection, which is subject to bufferbloat due to modem buffering.
As my bufferbloat grade is B, I started looking at some QoS shaper configuration, but I have some questions:
I am unclear on the difference between applying the QoS configuration to the Ethernet interface vs. the ifb interface. Unless I am missing something, the VyOS documentation only points to the Linux documentation for IFB, without much more detail applicable to VyOS. Are both configuration models supported: (1) applying the shaper to an ifb interface after a redirect from the WAN interface, and (2) applying the shaper directly to the interface? I read that the ifb model is recommended, but I am not sure why.
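If I read the docs right, the two models would look roughly like this in the 1.4 qos syntax. This is an untested sketch: eth0 as WAN, and the policy names and rates are placeholders I made up.

Model 2, shaper applied directly to the WAN interface (egress only):

  set qos policy shaper WAN-OUT bandwidth '45mbit'
  set qos policy shaper WAN-OUT default bandwidth '100%'
  set qos policy shaper WAN-OUT default queue-type 'fq-codel'
  set qos interface eth0 egress 'WAN-OUT'

Model 1, inbound WAN traffic redirected to an ifb and shaped there:

  set interfaces input ifb0
  set interfaces ethernet eth0 redirect 'ifb0'
  set qos policy shaper WAN-IN bandwidth '1400mbit'
  set qos policy shaper WAN-IN default bandwidth '100%'
  set qos policy shaper WAN-IN default queue-type 'fq-codel'
  set qos interface ifb0 egress 'WAN-IN'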
When it comes to verifying the QoS configuration implemented by VyOS: I cannot find the "show queueing" command that some people refer to online, and tc seems to be the only way to do some debugging. Is that correct in 1.4, or am I missing something?
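For now, plain tc is what I have been poking at, e.g. (with eth0 as the WAN interface):

  tc -s qdisc show dev eth0
  tc -s class show dev eth0
  tc -s filter show dev eth0 parent ffff:

If I understand correctly, the last one should list the ingress redirect filters when an ifb is in use.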
Do I need a shaper on egress only for the WAN, or also for the LAN? If the latter, is it done through two different ifb interfaces?
For reference, my shaper configuration for my 1.5 Gbps/50 Mbps service, which gets me an A instead of a B, is below. However, I am still missing low latency for gaming and cannot figure out why, especially as I am still trying to understand how to debug using tc.
If you don't overuse a link, then it doesn't matter whether you have a 64 kbyte or a 64 Gbyte buffer, because the packet will be buffered anyway (store-and-forward) and then forwarded directly, since the link is available.
"Bufferbloat" only has a side effect for realtime transmissions such as VoIP, where it's preferred to drop the packet if the link has no room (in order to make room) rather than occupy space and have the packet delivered late, since most codecs have a decoding window of about 30 ms before they start to extrapolate missing packets. That being said, you can normally accept about 300 ms of latency end to end before the people in front of the microphones start colliding when speaking, i.e. noticing the delay.
So with that being said: if you have an asymmetric link such as 1500 Mbps down and 50 Mbps up (which sounds a bit extreme to me, since at a 30:1 down/up ratio a large share of the upstream is consumed by ACKs whenever you download), my best recommendation is NOT to throttle the upstream bandwidth, but rather to prioritize outgoing ACKs over everything else as your first choice.
Second to that, implement an egress shaper that buffers traffic so as not to overrun your ISP's upstream from you towards the internet (they are probably just doing RED or WRED, which is a dirty but CPU-efficient way to shrink the outgoing queue, and when your ACK packets are randomly thrown away by your ISP, your download speed will suffer).
I haven't yet looked into how to set up an egress shaper that prioritizes outgoing ACKs in VyOS.
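In plain tc (which VyOS drives under the hood), the classic wondershaper-style classifier looks roughly like this. Treat it as an untested sketch of the matching logic only; eth0 is a placeholder, and replacing the root qdisc like this would clobber any shaper VyOS has already applied:

  # 3-band prio qdisc; band 1:1 is always dequeued first
  tc qdisc add dev eth0 root handle 1: prio
  # Match small TCP packets with only the ACK flag set and put them in band 1:1:
  #   ip protocol 6 = TCP; u8 0x05/0x0f at 0 = IPv4 header without options (IHL=5);
  #   u16 0x0000/0xffc0 at 2 = total length < 64 bytes;
  #   u8 0x10/0xff at 33 = TCP flags byte (offset 20+13) equals ACK only.
  tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
      match ip protocol 6 0xff \
      match u8 0x05 0x0f at 0 \
      match u16 0x0000 0xffc0 at 2 \
      match u8 0x10 0xff at 33 \
      flowid 1:1

In a real setup you would attach a filter like that to a high-priority class of the shaper instead of a bare prio qdisc, but it shows the idea.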
QoS (shaping) can only be applied in the outgoing direction of an interface.
IFB is a way around this. See ifb as a virtual interface that incoming traffic is redirected to.
This way, shaping in the inbound direction is possible.
For a simple setup (single LAN and WAN interface), there's no need for ifb.
The outgoing shaper goes on the WAN interface, the incoming one on the LAN interface.
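For example, something like this in the 1.4 qos syntax (a rough sketch; eth0 as WAN and eth1 as LAN are assumptions, and the made-up rates are set slightly below the real line rates so the queue builds here and not in the modem):

  # upload: shape what leaves towards the ISP
  set qos policy shaper UPLOAD bandwidth '45mbit'
  set qos policy shaper UPLOAD default bandwidth '100%'
  set qos policy shaper UPLOAD default queue-type 'fq-codel'
  set qos interface eth0 egress 'UPLOAD'
  # download: shape what leaves towards the LAN clients
  set qos policy shaper DOWNLOAD bandwidth '1400mbit'
  set qos policy shaper DOWNLOAD default bandwidth '100%'
  set qos policy shaper DOWNLOAD default queue-type 'fq-codel'
  set qos interface eth1 egress 'DOWNLOAD'

With more than one LAN interface this no longer covers all downloads, and that's where the ifb redirect comes in.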
And for sure, bufferbloat isn't a myth. On DSL and cable modems, filling up the upstream resulted in the modem's FIFO queue holding over 500 ms worth of packets, with ping times going through the roof.
Thanks a lot @16again for the clarification on the value of using an ifb interface. I knew shaping was egress-only (from my time working with Cisco); nice to see people leverage Linux to extend features.
Funny that the first response I got discusses a service I cannot design or influence myself in terms of upstream/downstream ratio, which pushes up the chance of hitting the bufferbloat issue.
@16again: Ping times go through the roof because you oversubscribed the link.
From a data-transmission point of view, the only time buffering isn't something you like is in realtime transmissions such as VoIP traffic; in any other situation you want the packet to be buffered rather than dropped.
@talyos: That is because the fix in your case, with a massively asymmetric link such as 1500 Mbps down and 50 Mbps up, is to prioritize outgoing ACKs. This way, even during load, your download speed will be kept at 1500 Mbps instead of being throttled to something lower because your ISP drops packets that your TCP sessions will be unhappy about (due to RED/WRED being used by the ISP to drop packets in order to shrink queues).
In the end, oversubscription is what causes the huge ping times, but only in combination with the way-too-large buffer in the ISP modem.
Buffering is good, but don't over-buffer. It causes delay, and lots of packets will still be dropped.
Try typing in an SSH session with an oversized buffer, or opening a website with multiple DNS lookups, 3-way handshakes, cert exchanges, etc.
Some of us do believe in the bufferbloat "myth", mostly because we have felt its ill effects.
QoS, shaping, traffic classes, and fairness (SFQ) can really improve things.
Again, buffer size doesn't affect normal, non-oversubscribed traffic, so it's a hoax that buffers are bad.
The only specific traffic case where buffers are to some extent bad is realtime traffic such as VoIP, where you would rather drop the packet in order to make room on the link than queue it and make sure it's delivered.
And the solution to an asymmetric connection such as the one the thread starter has is to prioritize outgoing ACK packets so they don't get dropped by the RED/WRED policing that most ISPs use (since the ISP has too-small buffers).
Again, buffer size doesn't affect normal, non-oversubscribed traffic, so it's a hoax that buffers are bad.
I don't think we said that buffers are bad. It is more about the effect of them filling up, in scenarios where protocols beyond just VoIP end up being affected.
UDP is used in more scenarios than you seem to imply, and those would be affected. Even QUIC, being UDP-based, might be affected; I have never looked into it in enough detail to validate that statement, but judging from the user impact in the household, UDP was definitely affected.
Maybe I am misinterpreting your comment on ISP RED/WRED policing, but I don't think it comes into play here, since we are referring to a policy that controls the traffic from the VyOS instance (aka the CPE) to the cable modem, to keep that modem's buffers from filling up. I am not aware of cable modems implementing RED/WRED.
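To make that concrete, the idea is to keep the queue inside VyOS by shaping egress slightly below the modem's uplink rate, so the modem's FIFO stays empty. Something like this CAKE policy, as an untested sketch (assuming the cake policy is available in this 1.4 rolling build; MODEM-OUT and eth0 are placeholders, and 45 Mbit is roughly 90% of my 50 Mbps uplink):

  set qos policy cake MODEM-OUT bandwidth '45mbit'
  set qos policy cake MODEM-OUT rtt '100'
  set qos interface eth0 egress 'MODEM-OUT'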
QUIC, just like HTTP(S) over TCP, would prefer to have packets delivered rather than randomly dropped, which is what happens if you have too-small buffers.
The same goes for TFTP, which prefers to have packets delivered rather than randomly dropped.
The only corner case that doesn't like buffers larger than the bare minimum is realtime traffic such as VoIP, where anything delayed more than approx. 30 ms will be dropped at the destination anyway, since the decoder has progressed and already extrapolated the missing bits. That means the queued and delayed packet only took space from other packets on the wire.
The effect you are seeing is most likely the ACK packets being randomly dropped by your ISP to free up its egress queues upstream. Even though QUIC uses UDP, it still has to deal with the equivalent of TCP ACK packets. (This is why, for example, Google favours QUIC: pushing out an updated version of Google Chrome with better congestion-control algorithms, to get more packets onto the wire, is easier and far faster than waiting for operating systems around the world to tweak their TCP/IP stacks to do the same for TCP-based application protocols.)
Yeah, and they don't understand that there is more latency between their wireless mouse and their computer before anything is drawn on the screen than that computer has towards the server.
I have tried to play with the bufferbloat shapers on both OPNsense and VyOS, and it did not make any difference.
I had issues during the early days of COVID, when both the ISP network and the home network were congested, with a previous-generation modem. I upgraded to a new modem last year, and that seems to have reduced the scenarios where an actual shaper needs to be applied.
The shapers I applied were too aggressive and got me a B or C rating, depending on the configuration I selected.
I would say that for now I am not going to play with that, since my setup does not seem to require it.
Given that I have a box with plenty of buffers sitting around, this site gives me:
BUFFERBLOAT GRADE: A ("Your latency increased slightly under load.")
LATENCY: Unloaded 3 ms; Download Active +4 ms; Upload Active +17 ms
SPEED: ↓ Download 318.5 Mbps; ↑ Upload 236.5 Mbps
I wouldn't trust the results too much (when it comes to the grade).
On the other hand, my internet connection is symmetric, and the upload/download limit is due to the hardware my current firewall is using.