TCP-slicer proxy

Hi, everyone!

Does VyOS have an option for “slicing” a single TCP connection into many concurrent ones?

For instance:

  1. The server prefetches 3KB of data, mostly HTTP resources.
  2. The server sends the 3KB to the client, but instead of acting as a traditional HTTP/SOCKS proxy it opens a multithreaded transfer with 3 connections and sends 1KB of data per thread over each connection.
  3. The client receives 1KB x 3, recombines the pieces into the original 3KB of data, and serves it back as a local HTTP proxy server.
  4. The client displays the original data in the browser via the local HTTP/SOCKS proxy.
    The latency is not important as long as the transfer rate is good.

Thx!

@Muaddib I really don’t understand the question.
VyOS is a routing platform.
What you might talk about is some kind of HTTP Proxy.
I have not tried to use a proxy in VyOS for a very long time now, but there is no “slicing” of one TCP connection into many concurrent ones in any book or RFC on TCP.
The only thing that comes close is SCTP, and I do not know of a server on the internet that utilizes SCTP.
Also, there is no real reason to do this kind of thing with an HTTP proxy if the connection is fast enough; 3KB on a 1Mbit line should not take an hour.

Can you give a scenario which will make sense of this?

The satellite connection shapes every single TCP connection to at most 5 Mbps, and since almost every application communicates with its server side over a single TCP connection, the effective speed is very low… even though the satellite link itself can carry up to 70 Mbps.

There is such a thing as a PEP (Performance Enhancing Proxy), which does exactly this splitting of one single TCP connection into several concurrent ones, resulting in a great speed-up.
For example, the XipLink XA-10K TCP accelerator, which is an iPEP.

I was thinking of some software solution to be installed on my remote server.

You can use the server_persistent_connections off option in squid.conf.
If there is no option to configure it via the VyOS CLI then it should be added somehow.
Take a peek at:
http://www.squid-cache.org/Doc/config/server_persistent_connections/

this should do the trick.
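For reference, a minimal squid.conf fragment with just that directive (the listening port is illustrative; the last line is the relevant setting):

  # illustrative listening port, adjust to your environment
  http_port 3128
  # do not reuse server-side TCP connections across requests
  server_persistent_connections off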
What you are talking about is a pretty unique scenario, but there are very simple solutions for it.

I use a shadowsocks proxy with muxing enabled (mux=1), so the client establishes many concurrent connections with the server, but it doesn’t split outgoing traffic between the client-server connections… so I seriously doubt that Squid will do.

Thanks for help, though!

There is a difference between creating a tunnel, routing per packet over each of the tunnels, and disabling reverse path filtering.
As far as Squid goes, it can be told strictly not to pipeline multiple HTTP requests on the same TCP/HTTP connection.
That means each HTTP request will be passed over a new connection to the server.
If you really want to do more than that you will need two separate nodes.
One at the satellite connection site and another somewhere on the internet.
Then you can connect them using some kind of tunneling protocol over multiple connections; for example, you can use WireGuard with, let’s say, 10 or 20 endpoints, and then on the routing platform use ECMP routing over these endpoints.
Just to illustrate:
Let’s say the SAT GW is a box and you have a Linux box behind the SAT GW; on that Linux box, and on the other site, i.e. the DC (data center), you will have a node with wg0-wg20 interfaces.
On the DC node you will listen on 20 ports, i.e. 53001-53020, each one a single WireGuard listener.
On the DC side you will assign a /30 subnet, i.e. wg0 100.10.10.1/30, and on the SAT side you will assign 100.10.10.2/30.
On wg1 you will assign 100.10.10.5/30 and on the SAT side 100.10.10.6/30 (100.10.10.4 is the network address of that /30, so the usable pair is .5/.6).
etc… for each of wg1-wg20.
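To make one of those listeners concrete, here is a minimal sketch of the DC-side wg0 in plain wg-quick form (keys are placeholders, 192.168.1.0/24 is the LAN behind the SAT gateway from the routing example below, and Table = off keeps wg-quick from installing its own routes since the ECMP routes are added by hand):

  # /etc/wireguard/wg0.conf on the DC node
  [Interface]
  Address = 100.10.10.1/30
  ListenPort = 53001
  PrivateKey = <dc-wg0-private-key>
  Table = off
  MTU = 1320   # leave headroom for the satellite link, see the MTU note further down

  [Peer]
  PublicKey = <sat-wg0-public-key>
  # cryptokey routing: the SAT-side tunnel address plus the LAN behind the SAT gateway
  AllowedIPs = 100.10.10.2/32, 192.168.1.0/24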
Then on each end you will define a static route towards the relevant networks in ECMP fashion with weight 1.
From the DC side you will need a specific destination IP or destination network, let’s say 192.168.1.0/24,
and on the SAT side you will use a default route, i.e.:

ip route add default nexthop via 100.10.10.1 weight 1 nexthop via 100.10.10.5 weight 1 nexthop via 100.10.10.9 weight 1
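and, for completeness, a matching sketch of the DC-side route back towards the LAN behind the SAT gateway (the SAT-side tunnel addresses .2, .6 and .10 follow from the /30 plan above):

ip route add 192.168.1.0/24 nexthop via 100.10.10.2 weight 1 nexthop via 100.10.10.6 weight 1 nexthop via 100.10.10.10 weight 1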

This will tunnel traffic over multiple WireGuard UDP tunnels, with an MTU lower than 1500; on satellite links the MTU is usually below 1500 by default anyway, so it would end up around 1300 (the WireGuard default is 1420).
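If you want to pin that explicitly, something like ip link set dev wg0 mtu 1300 on each tunnel interface (or an MTU line in the wg-quick config, as in the sketch above) should do it.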

There are other options and solutions for your scenario, and I do understand that, but you really need another node in a DC so you have full control over the connections from the SAT side to the world, down to the transport layer.
It’s not a trivial setup, but it is pretty straightforward.

ECMP (equal cost multipath) does exist, but with it a single session will always egress the same physical path, because it’s a nightmare if/when the packets arrive out of order.

A single session is defined by the 5-tuple (protocol + srcip + dstip + srcport + dstport), similar to what can be configured for link aggregation, either static or dynamic (LACP).

Normally applications (including VPN solutions) can accept something like 32-100 packets out of order (that is, packet no. 100 arrives before packet no. 1 at the destination), but anything above that threshold will often be discarded.

In VyOS there is a set load-balancing wan rule 1 per-packet-balancing option which might be used in combination with encrypted and/or cleartext tunneling. But again, per-packet balancing should almost always be avoided at all costs.
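Roughly, such a rule looks like this (interface names and the rule number are illustrative, so check the WAN load balancing chapter of the docs for your VyOS version):

  set load-balancing wan rule 10 inbound-interface eth2
  set load-balancing wan rule 10 per-packet-balancing
  set load-balancing wan rule 10 interface eth0 weight 1
  set load-balancing wan rule 10 interface eth1 weight 1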

There are some proprietary systems used within satcom and cellular communications (https://www.peplink.com/ and https://www.celerway.com/ are two vendors that come to mind) that basically tag each packet being sent, so the receiving end can recombine the traffic; once forwarded, the packets are in order again even if the router at the destination received them heavily out of order.

They can also be configured from an availability point of view instead of performance. That is, packets are duplicated across all available links and the receiving router will then only forward one copy of each packet, in order, towards the application located behind the destination router.

When it comes to multipathing, especially over satcom and cellular links, something that is also popular is FEC (forward error correction). A certain amount of each packet is used for error-correction data, so that if a packet arrives broken it can still be reconstructed to its original form and forwarded to the destination application. The same goes for interleaving of packets: even if a few packets are physically missing at the destination, the destination router can still reconstruct them and forward them to the destination application.

All these features would be nice if they somehow could be implemented in future versions of VyOS :slight_smile:

I don’t know if you have seen Peplink’s implementation, but it’s not doing any better than what I have suggested.
The “best” thing you could add to the picture is a TCP intercept proxy on the other side of the connection that is ready, at the OS level, to accept packets out of order and to reconstruct the TCP connection against the other peer of the connection.
It’s so simple to implement that, and I do not see any real value in the Peplink or Celerway products other than “simplicity”. If you are willing to pay them for both an SD-WAN and an endpoint router, then it comes down to the relevant trade-offs: efficiency, simplicity and cost.

It’s so simple to implement, yet so uncommon out there that there are a few dedicated companies doing just that?

The point here is that the aggregation of multiple paths that these products can do (either for availability, where packets are duplicated over the available paths, or for performance, where packets are split over the available paths) does the “TCP-slicing” the original thread seems to talk about.

That is:

client ↔ magic-router (split up over multiple paths) ↔ internet with multiple paths (different ISPs, different transmission types like cable, wifi, cellular, satcom) ↔ magic-router (combine the multiple paths and forward as a single stream just like the client originally sent it) ↔ server

I would love to see this feature show up in VyOS if its so easy to implement :slight_smile:

My guess is that these companies have taken some open-source project and rebranded it, since their OSes often are just that, for example rebranded OpenWRT.

Thanks, but as I’ve figured out, it only works when you have several physical WAN connections for load balancing.
While I have only one physical connection to the satellite terminal, which shapes every TCP connection I establish…

That’s why I want to outsmart the SAT hardware and split the original incoming and outgoing traffic into many TCP connections between my proxy client and proxy server. The proxy server would be installed on some VPS with a normal, regular land Internet connection, of course :slight_smile:

I don’t find it difficult to implement, since every packet can be tagged (numbered) and then, after some retries, it’d be delivered one way or another and combined with the others in its order. Like uTorrent downloads a file piece by piece, checks the checksum of every piece, and adds it to the previous pieces comprising the desired file.

Latency would be increased drastically, clearly.

As for hardware, I discovered the XipLink XA-10K, which does the thing, but at a price of $5k per item (and you need 2 of them)…whew

A workaround would perhaps be to set up two VyOS boxes in series?

The first is connected to your satcom, with one interface towards the satcom and then two interfaces to that 2nd VyOS.

Lets say eth1: 169.254.1.1/24 and eth2: 169.254.2.1/24.

Then on that 2nd VyOS you will have two physical interfaces and can configure them as “WAN1” and “WAN2” and apply WAN load balancing between them?

That is, the 2nd VyOS would have eth1: 169.254.1.2/24 and eth2: 169.254.2.2/24 and two default routes, one with next-hop 169.254.1.1 and the other towards 169.254.2.1.
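In CLI terms, a rough sketch of that 2nd VyOS would be (addresses as above; two next-hops on the same destination give ECMP):

  set interfaces ethernet eth1 address '169.254.1.2/24'
  set interfaces ethernet eth2 address '169.254.2.2/24'
  set protocols static route 0.0.0.0/0 next-hop 169.254.1.1
  set protocols static route 0.0.0.0/0 next-hop 169.254.2.1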

Per-packet balancing would of course not change how the satcom sees your traffic, but if you can have a VyOS on the receiving end then perhaps you can use WireGuard or similar to establish two tunnels and then do per-packet balancing between them?

That is, on that 1st VyOS you have one WireGuard tunnel for eth1 and another for eth2, both egressing eth0 towards the satcom. This way you get two sessions. You would probably be able to do this without involving a 2nd VyOS on that first site.

That is, you set up multiple WireGuard tunnels between site A and site B and then you do routing between the different tunnels and enable per-packet balancing (or at least per-flow balancing, since per-packet might have issues with out-of-order packets unless properly buffered on the receiving end).


You do understand that writing a proxy is not such a “complex” task, right?
These companies write their own software; they may use the Linux kernel, but they have their own patches and their own pieces of software which they developed.
I merely suggested an option that can work on any Linux distro which has nftables and WireGuard.
And @Muaddib: you can use a single WAN interface on each of the VyOS boxes.
The benefit of the ready-to-use products is their hardware and software.
They have automated systems that do all the configuration in a very simple manner.
If you are up for the labor, it’s not too much of a hassle to put this setup into action.
Once the WireGuard links are up, it’s a very simple matter of routing.

In any case you will need hardware, and as long as you don’t need to exceed 100 Mbps any routing hardware can do that.
There are other vendors that have WireGuard support and routing capabilities that you can use at a much lower price.
With PBR you can define rules that will send the traffic from the LAN network over the WireGuard interfaces, either load balancing per packet in a round-robin fashion or sending each 5-tuple via the same interface.

An example of such a setup with a SAT router and an SD-WAN router could be something like this:

This works with per-packet load balancing, and since there are three WireGuard tunnels it would create 3 UDP flows/streams over the SAT connection.
If you were to create, let’s say, 20 tunnels you would be able to maximize the speed, i.e. at about 4 Mbps per stream * 20 that is roughly 80 Mbps max.
It’s pretty simple to set up in VyOS and can also be configured on bare Linux such as Debian/Ubuntu/Alma/CentOS/Rocky/Oracle.
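For example, one of those tunnels on the SAT-side VyOS could look roughly like this (the endpoint address, port and key are placeholders, and the exact node names vary a bit between VyOS versions, e.g. pubkey vs public-key):

  set interfaces wireguard wg0 address '100.10.10.2/30'
  set interfaces wireguard wg0 peer DC-wg0 address '203.0.113.10'
  set interfaces wireguard wg0 peer DC-wg0 port '53001'
  set interfaces wireguard wg0 peer DC-wg0 allowed-ips '0.0.0.0/0'
  set interfaces wireguard wg0 peer DC-wg0 pubkey '<dc-wg0-public-key>'
  set protocols static route 0.0.0.0/0 next-hop 100.10.10.1
  set protocols static route 0.0.0.0/0 next-hop 100.10.10.5
  set protocols static route 0.0.0.0/0 next-hop 100.10.10.9

Note that plain ECMP static routes like these balance per flow; actual per-packet balancing would need something like the load-balancing wan rule mentioned earlier in the thread.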

By the way squid 4.x is now EOL so the VyOS project should upgrade to a newer version.

If you need to intercept all TCP and UDP connections to make the VyOS device act as an MTU and other network-layer “fixer” between the LAN and WAN segments of the network, you could use a tiny proxy for that.
I have used goproxy from snail007 for a while for something similar:
https://github.com/snail007/goproxy
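If memory serves, forwarding a single TCP service through it looks something like this (flags quoted from memory and the addresses are placeholders, so verify against the project’s README):

proxy tcp -p ":33080" -T tcp -P "198.51.100.10:33080"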

By the way, their OSes are not rebranded OpenWRT…
If you had seen their OS from the inside you would actually go and buy their product…
If someone has done a good job he should be paid for it.
The main issue is that if someone did a bad job and you paid for it, you might not get a fix later on if you won’t pay again.
There is a big difference between paying for a product and paying for a service.
Today many companies make you pay for a service, since a one-time product price would not cover the ongoing work on the product (which is a requirement).

There are exceptional products that do not require any service payment because of their simple mechanics.
There are many simple switches that do not require any upgrades and they just work for ages.
There are many routing and switching platforms that do not require any upgrades for years if configured properly, unless you need “new features”.

Great thanks, elico! I’ll have a deep look into it, since I’m not a big fan of Linux)

@elico: Well, I do understand that most commercial companies steal or borrow open-source products and rebrand them.

It’s not the first time I have seen an OpenWRT web GUI where every mention of “OpenWRT” has been replaced by the proprietary name.

Also, the examples you provided with 3 paths and per-packet load balancing will fail if the latency difference between the paths is too large, as in the packets arriving out of order by more than 32-100 packets, depending on the destination host’s TCP/IP stack settings.
