So I am looking at a virtualised environment: VMware ESXi 5.5.
Currently I have 10G core switching in place and my hosts have 4 x 10G NICs.
I am using CentOS 6.5 and Windows 7/Server 2008.
I have done some initial testing, CentOS → CentOS, and I can push a sustained ~9Gb/s on the vmxnet3 NIC emulation. This is with iperf, not routing, but I am not expecting an order of magnitude difference!
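For reference, a baseline guest-to-guest iperf test of this kind (my sketch, not the poster's exact commands; the 10.0.0.2 address is hypothetical and iperf is assumed installed on both CentOS guests) looks something like:

```shell
# On the receiving CentOS guest (hypothetical address 10.0.0.2):
iperf -s

# On the sending guest: a single TCP stream for 30 seconds
iperf -c 10.0.0.2 -t 30

# Parallel streams (-P) often help saturate a 10G link
# when one stream is CPU- or window-limited
iperf -c 10.0.0.2 -t 30 -P 4
```

Note this measures host-to-host TCP throughput only; it says nothing yet about forwarding performance through a router VM.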
I was hoping to use a packaged firewall/router solution, but I have run into some limitations:
RouterOS … no support for the vmxnet3 NIC, limited to 1G throughput, and it seems to be limited to one core …
Vyatta … looked good until I realised that Vyatta Core (VC) is dead, and I hate to think what Brocade are going to charge … they started to talk about per-core licensing.
So eventually, with some digging, I got here.
I don’t think routing 10G in a VM should be unreasonable … I would be happy with 5Gb/s; anything more than 1Gb/s, really.
I don’t want to build it on CentOS … that means I have to get zebra/quagga or BIRD running, and well …
My hope is that VyOS might be able to do it. I tried the old VC and got 1Gb/s with the vmxnet3 driver.
My guess/hope is that once the base is brought up to the latest kernel and the VMware driver is updated, there might be a chance. I am actually liking the Vyatta interface, in fact more than the RouterOS one.
A
EDIT:
I have done some quick testing, and it seems I can get an aggregate throughput of around 6.7Gb/s through a straight CentOS 6.5 image with two vmxnet3 NICs acting as the router. From server A to the router: 9.6Gb/s, and from server B to the router: 9.6Gb/s.
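An aggregate forwarding test like the one described might be run as follows (a sketch under assumed addressing, not the poster's actual setup: the router VM has one vmxnet3 NIC per subnet, server A lives on 10.0.1.0/24 and server B on 10.0.2.0/24):

```shell
# Hypothetical topology:
#   router VM: eth0 = 10.0.1.1/24, eth1 = 10.0.2.1/24 (IP forwarding enabled)
#   server A:  10.0.1.10, default route via 10.0.1.1
#   server B:  10.0.2.10, default route via 10.0.2.1

# Start an iperf receiver on each server:
iperf -s    # on server A and on server B

# Then launch both directions at the same time so the router forwards
# traffic on both NICs simultaneously:
iperf -c 10.0.2.10 -t 60 -P 4    # run on server A
iperf -c 10.0.1.10 -t 60 -P 4    # run on server B

# Summing the two reported throughputs gives the aggregate figure.
```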
It is not limited to anything, at least, not intentionally.
I’d like to see the test details. For a fair test you need to use an external traffic generator and receiver, preferably an IXIA/Spirent/Xena device, but a reasonably modern server is fine too.
Just a sanity check: can you confirm which driver was loaded on VyOS? `sudo ethtool -i eth0` should show vmxnet3. I remember vmxnet is 1Gb/s while vmxnet3 is 10Gb/s. In my tests with iperf I have not seen any difference between a CentOS VM and Vyatta with vmxnet3; both showed ~9.8Gb/s, but that was between Vyatta and another VM running on the same ESXi host.
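The driver check mentioned above would look something like this (output fields are the standard `ethtool -i` ones; the exact version strings will vary by guest):

```shell
# Run inside the VyOS/VM guest to see which paravirtual NIC driver is bound:
sudo ethtool -i eth0
# Expect the first line to read "driver: vmxnet3".
# If it shows "vmxnet" (without the 3), the guest is using the older
# paravirtual driver, which tops out around 1Gb/s.
```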
Generally speaking, PPS, not throughput, is what will ultimately kill an x86-based router. So the question is: 10G at what PPS? Is this straight routing, or are firewalls/policies involved?
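To make the PPS point concrete, here is standard Ethernet line-rate arithmetic (my illustration, not figures from the thread): each frame carries 20 bytes of on-wire overhead (8B preamble + 12B inter-frame gap) on top of its length, so the packet rate at 10Gb/s varies enormously with frame size.

```shell
# Packets per second needed to fill 10Gb/s at various frame sizes.
# wire bytes = frame + 20 (preamble + inter-frame gap); 8 bits per byte.
for frame in 64 512 1500; do
  wire=$((frame + 20))
  pps=$((10000000000 / (wire * 8)))
  echo "${frame}B frames: ${pps} pps"
done
```

At 64-byte frames this works out to roughly 14.9 Mpps, versus only ~0.8 Mpps at 1500-byte frames, which is why an iperf bulk-transfer result (large frames) says little about small-packet routing performance.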
10G routing on any straight x86-based router is probably a bad idea for mission-critical environments, as things stand today.