I managed to create an Ethernet bridge group, assign it an IP address, and, amazingly, it's passing data!
Since I want to use this to pass 250-600 MB/sec, I need to minimize CPU consumption. Right now it uses 50-60% of the CPU for 110 MB/sec.
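As a back-of-the-envelope check (using my numbers above, plus the assumption that bridging cost scales roughly linearly with throughput, which ignores interrupt batching and offload effects), one vCPU clearly won't cover the target range:

```python
# Naive linear extrapolation: assumes per-byte bridging cost is constant.
def estimated_cpu_pct(observed_cpu_pct, observed_mb_s, target_mb_s):
    return observed_cpu_pct * target_mb_s / observed_mb_s

# Observed: ~55% of one vCPU at 110 MB/sec.
print(estimated_cpu_pct(55, 110, 250))  # 125.0 -> more than one core
print(estimated_cpu_pct(55, 110, 600))  # 300.0 -> several cores
```

So even at the low end of the target, a single vCPU is over budget, which is why the multi-CPU question below matters.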
Questions:
I used the OVA file, but it would only take Flexible or E1000 NICs. Since this appliance appears to have the VMware Tools already loaded, I would like to use VMXNET2/3 NICs to minimize CPU use. Can this be done?
- I found this was as simple as switching the OS type in the vCenter Client from Other Linux (32-bit) to Other 2.6.x Linux (32-bit). I was able to add VMXNET3 NICs and they were detected without a problem. It appears to have saved me 10-15% CPU, as I am now in the 40-50% range maxing out a 1 Gbit Ethernet link.
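For anyone else wanting to confirm which driver actually bound, this can be checked from the VyOS shell (a sketch; ethtool ships with my install, your mileage may vary):

```
vyos@vyos-rtr:~$ ethtool -i eth0
driver: vmxnet3
...
```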
If I add some more vCPUs, does this scale well over multiple CPUs? I will test, of course, but I was worried the bridging might be single-threaded.
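On the single-thread worry: VyOS 1.x exposes a per-interface smp_affinity knob (it shows up as auto in my config dump), which can at least pin each NIC's interrupt handling to a different vCPU. A sketch, assuming a 2-vCPU VM; the values are hex CPU bitmasks:

```
configure
set interfaces ethernet eth0 smp_affinity 1    # CPU0
set interfaces ethernet eth1 smp_affinity 2    # CPU1
commit
save
```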
login vyos:vyos
configure
edit interfaces ethernet eth0
set bridge-group bridge br0
exit
edit interfaces ethernet eth1
set bridge-group bridge br0
exit
set system host-name vyos-rtr
set interfaces bridge 'br0'
set interfaces bridge br0 address '192.168.110.51/24'
commit
save
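To sanity-check the result before rebooting, the underlying Linux bridge can be inspected from op mode (a sketch; brctl is present on my VyOS 1.x install):

```
show interfaces
sudo brctl show br0    # eth0 and eth1 should be listed as members
```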
I documented it this time, as when I rebooted the VM... well, let's just say I didn't know I had to use save after commit …
Password not yet set, and no personal info to remove.
vyos@vyos-rtr:~$ configure
[edit]
vyos@vyos-rtr# show
interfaces {
bridge br0 {
address 192.168.110.51/24
aging 300
hello-time 2
max-age 20
priority 0
stp false
}
ethernet eth0 {
bridge-group {
bridge br0
}
duplex auto
hw-id 00:50:56:a9:e7:b6
smp_affinity auto
speed auto
}
ethernet eth1 {
bridge-group {
bridge br0
}
duplex auto
hw-id 00:50:56:a9:84:82
smp_affinity auto
speed auto
}
loopback lo {
}
}
service {
ssh {
port 22
}
}
system {
config-management {
commit-revisions 20
}
console {
device ttyS0 {
speed 9600
}
}
host-name vyos-rtr
login {
user vyos {
authentication {
encrypted-password $1$lujeCDPs$YfFI/4Ilfh2RTVuH1C/Dc/
}
level admin
}
}
ntp {
server 0.pool.ntp.org {
}
server 1.pool.ntp.org {
}
server 2.pool.ntp.org {
}
}
package {
auto-sync 1
repository community {
components main
distribution hydrogen
password ""
url http://packages.vyos.net/vyos
username ""
}
}
syslog {
global {
facility all {
level notice
}
facility protocols {
level debug
}
}
}
time-zone UTC
}
I appreciate the feedback, but I really do need to use VMware: staying current with that product range is a big part of the point of this exercise.
I am seeing some strange performance: an asymmetrical result.
I am connecting my test desktop to my ESXi server via a 10 Gbit Ethernet NIC. Using iperf, I see almost exactly 1 Gbit/sec in one direction, and 5.5-7 Gbit/sec in the other, when I pass through this VyOS VM acting as an Ethernet bridge. It is repeatable every time. I know it's the VyOS switch because if I move the VM to the vSwitch that contains the physical 10G NIC, I get high performance in both directions.
Running the iperf client on the desktop and the iperf server on the ESXi-hosted VM, I get 5+ Gbit/sec.
However, when I run the client on the ESXi-hosted VM and the server on the desktop, I only get 1 Gbit/sec.
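For reference, the two runs boil down to swapping which end runs the client (the IPs here are placeholders for my desktop and the ESXi-hosted VM):

```
# Fast direction (~5+ Gbit/sec): server on the VM, client on the desktop
vm$      iperf -s
desktop$ iperf -c <vm-ip> -t 30

# Slow direction (~1 Gbit/sec): server on the desktop, client on the VM
desktop$ iperf -s
vm$      iperf -c <desktop-ip> -t 30
```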
I know it's late, but I found this post browsing the interwebs and signed up just to answer it. I am pretty sure the speeds are so high when the client and server are on the same physical machine because the traffic never leaves the vSwitch. It's just copying between memory addresses; there's no transfer time, since the packets never hit the network. It follows that your speed is limited by the speed of your RAM.
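Rough numbers back that up (the memory-bandwidth figure is an assumption for a dual-channel DDR3-1600 host, not a measurement from this thread):

```python
# A 10 Gbit/sec NIC moves at most 1.25 GB/sec on the wire, while the
# host's memory bus (assumed dual-channel DDR3-1600) offers ~25.6 GB/sec,
# so a vSwitch copy that never leaves RAM has roughly 20x the headroom.
link_gb_s = 10 / 8           # 10 Gbit/sec expressed in GB/sec
ram_gb_s = 25.6              # assumed memory bandwidth
print(ram_gb_s / link_gb_s)  # 20.48
```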