Technical difference between interface and VIF

On a virtual instance of VyOS you can either:

  • use a single interface connected to a hypervisor TAP configured as a trunk, and define VIFs in VyOS
  • create one TAP per VLAN on the hypervisor and connect each one to a separate VyOS interface

You end up with either eth0 / eth1 / eth2 or eth0 vif 1 / eth0 vif 2 / eth0 vif 3.
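For illustration, the two layouts look roughly like this in the VyOS configuration (the addresses and VLAN IDs here are placeholders):

    # Option 1: one trunked vNIC, tagging done inside VyOS
    set interfaces ethernet eth0 vif 10 address 192.0.2.1/24
    set interfaces ethernet eth0 vif 20 address 198.51.100.1/24

    # Option 2: one vNIC per VLAN, tagging done by the hypervisor
    set interfaces ethernet eth0 address 192.0.2.1/24
    set interfaces ethernet eth1 address 198.51.100.1/24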

Both provide the same functionality, but are there technical differences in how the kernel will handle the traffic, for example in buffering, queuing, processor pinning, RAM usage, etc.?

I do not believe there is any difference once virtualized; perhaps there would be if you passed the NICs through directly?
Therefore I am not 100% able to answer the question…

However, I have mine set up over 3 NICs, which are VLANned inside VyOS only.
The 3 NICs are not tagged in the VM's NIC settings, so I use the VIF option as you mentioned: I have about 8 or 9 VIFs over the 3 vNICs, along the lines of the sketch below. This gives me ultimate portability, i.e. the ability to recreate the image anywhere, anytime with absolute ease, since I never have to bother with the numbering or with configuring the VM's NIC VLANs, as it's all done in the config file.
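As a rough sketch of that layout (again, the VLAN IDs and addresses are made up), everything lives in the VyOS config file while the three vNICs stay plain untagged ports:

    set interfaces ethernet eth0 vif 10 address 192.0.2.1/24
    set interfaces ethernet eth0 vif 20 address 198.51.100.1/24
    set interfaces ethernet eth1 vif 30 address 203.0.113.1/25
    set interfaces ethernet eth2 vif 40 address 203.0.113.129/25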

That's my 2c on why I prefer the VIF method.

The difference is simple: an interface directly represents a hardware NIC, while a VIF is just what it says, virtual. Since VIFs are virtual, several of them can exist for different reasons while all being tied to the same hardware NIC.

The most common usage for VIFs is VLANs. A plain interface can only carry traffic for a single VLAN, so how would you process traffic for two different VLANs if you only had hardware NICs? You'd need two hardware NICs. That's where VIFs come in: you can create two VIFs for two VLANs and have both use the same hardware NIC.
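Under the hood, a VIF in VyOS is a standard Linux 802.1Q VLAN sub-interface of its parent NIC. You can see this from the VyOS shell; eth0 vif 10 appears to the kernel as eth0.10 (output abbreviated, details vary by version):

    ip -d link show eth0.10
    # 5: eth0.10@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
    #     vlan protocol 802.1Q id 10 ...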

There are other more advanced use cases for VIFs, but I won't get into them here.

Oh, and don't get confused by the term VIF just because it's inside a virtual machine. A VIF is virtual to VyOS, not to the virtual machine. Remember that software (for the most part) doesn't know it's running in a virtual environment (that's sort of the point of virtualization). The hardware interface in VyOS is what represents the virtual machine's virtual interface.

@jscarle, inside the VM there is no hardware NIC; both types of interface are virtual inside a VM in most use cases, not counting the edge case of a passthrough card.

In reality, the biggest difference inside the VM between the two scenarios above is where you do the VLAN tagging: either inside VyOS with VIFs, or by adding multiple tagged NICs to the VM. And there is a third way: the NIC(s) in the host are tagged already, so the hypervisor switch and any NIC using that switch need no VLAN tagging at all, e.g. Windows Server teaming with Hyper-V.
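For that third way, a rough Linux/KVM analogue of the Hyper-V teaming setup (the bridge, uplink, and TAP names here are made up) would be a VLAN-filtering bridge that tags and untags on behalf of the VM, so VyOS only ever sees untagged frames:

    # enable VLAN filtering on the bridge the VM's TAP hangs off
    ip link set br0 type bridge vlan_filtering 1
    # VLAN 10 arrives tagged on the host uplink...
    bridge vlan add dev eth0 vid 10
    # ...and is delivered untagged to the VM's TAP
    bridge vlan add dev tap0 vid 10 pvid untagged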

I guess these are all the easy part. We all appear to be on the same page in terms of the different ways to add interfaces to a VM; the question was more about the technical differences between the different methods of attaching the interfaces, which is much more complex.

Yes, @blackhole is right here.

For example, the traffic for all VIFs under eth0 might be handled by the same vCPU, while the traffic of different interfaces (eth0 / eth1) could be handled by different vCPUs.

This would make a huge difference.
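One way to check this from the VyOS shell (a sketch; the exact names depend on the hypervisor's NIC driver): a VLAN sub-interface has no IRQs or queues of its own, it shares its parent's, whereas each vNIC gets its own queue set that can be serviced by a different vCPU:

    # IRQ-to-vCPU distribution of the vNIC queues (virtio example)
    grep virtio /proc/interrupts
    # number of RX/TX queues on a given vNIC
    ethtool -l eth0
    # RPS CPU mask for an RX queue (exists per parent NIC, not per VIF)
    cat /sys/class/net/eth0/queues/rx-0/rps_cpus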

@blackhole, you are correct that inside a VM there is no “hardware” NIC, but the interface is still tied to the virtual machine's virtual NIC, whilst the VIF is not.