20-40Gbps aggregate capabilities on modern workstations virtualized?

I couldn't quickly find a topic matching my question without going way back in time, to when the hardware was much less capable.

I’m trying to find examples of ultra-modern workstation hardware, i.e. Intel 11th-gen CPUs, handling 20Gbps or more of aggregate routing.

My ‘pipe dream’ build is a switch-centric Proxmox/KVM setup: three top-tier OptiPlex workstations with Intel multiqueue NICs in a Proxmox cluster. Then, if at all possible, two VyOS VMs on each box, each able to handle 10Gbps full duplex, which is potentially 40Gbps aggregate. These would handle multiple full-table BGP peers on 10Gbps fiber uplinks.

I’m seeing examples of much older hardware doing ~36Gbps on bare metal but nothing showing VMs going that high.

What should I expect here? Has anyone got a link to any write-ups on VM performance? I prefer Proxmox/KVM, but I could suffer ESXi if I had to.

Thanks.

Thoughts?

I recommend not using any virtualization and installing VyOS directly on your server.
I'm pushing 4+Gbps / ~1.8Mpps of traffic on my VyOS box with an i9-9900K and I don't have any CPU problems.

I have a space constraint, or rather a cost of rent constraint. Going bare metal drastically reduces my options in the space I have available.

I prefer the virtualization model 100:1 over bare metal but I’ll be asking this box to do a lot more throughput than I’ve ever tried before.

The problem with a VM is that the limiting factor is the hypervisor NIC.
Example with KVM:
with VirtIO you can get up to 4-5Gbps
if you use DPDK (OVS) you could reach 8-9Gbps
for 10Gbps or more you need SR-IOV (DPDK) or PCI passthrough (rough config sketch below)
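To make the difference concrete, here is a minimal sketch of what the relevant lines in a Proxmox guest config could look like. The VM ID, MAC address, bridge name and PCI address are placeholders, not from the original post; adjust them for your own hardware.

```
# /etc/pve/qemu-server/101.conf  (illustrative fragment, not a full config)

# Option 1: paravirtualized virtio-net on a Linux bridge, with 4 queues.
# Convenient, but this is the path that tops out well below 10G line rate.
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4

# Option 2: pass a whole NIC (or an SR-IOV virtual function) straight into
# the guest, bypassing the hypervisor's virtual switch entirely.
# 0000:03:00.0 is a placeholder PCI address; find yours with lspci.
hostpci0: 0000:03:00.0
```

If you go the SR-IOV route, the virtual functions have to be created on the host first, e.g. something like `echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs` (interface name is a placeholder); each VF then shows up as its own PCI function that you can hand to a `hostpci` entry.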

To get 40, 100 or 200+ Gbps the VM needs to be highly optimised, and to reach 100G or more you usually also need SmartNICs such as Mellanox, or else a huge amount of CPU, which is sometimes impossible. You also need to handle queuing/multiqueue and dedicated I/O CPUs for the interfaces, so it is not so easy ;-). The more interfaces you have, the more CPU you need (see the tuning sketch below).
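As a rough sketch of that tuning on Proxmox and inside the guest, assuming VM ID 101, an example core list, and example interface/queue counts (all placeholders, and the `--affinity` option only exists on recent Proxmox versions):

```
# On the Proxmox host: expose host CPU flags, enable NUMA awareness,
# and (where supported) pin the guest to specific cores so the
# packet-processing vCPUs aren't scheduled all over the box.
qm set 101 --cpu host --numa 1
qm set 101 --affinity 0-7        # core list is a placeholder

# Inside the VyOS guest: check and raise the number of combined queues
# so interrupt/softirq load spreads across the vCPUs.
ethtool -l eth0
ethtool -L eth0 combined 4
```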

The same goes for ESXi: more than 10G without SR-IOV or passthrough is impossible regardless of your CPU, as the bottleneck is the virtual NIC and driver.

Edit: the same of course applies to containers, where you would need multus-cni as well.