Running headless VirtualBox inside Nested KVM

For the Ceph training at 42on I use VirtualBox to build Virtual Machines, since VirtualBox runs under macOS, Windows and Linux.

For the internal Git at 42on we use GitLab, and I wanted to use GitLab's CI to build my Virtual Machines automatically.

As we don’t have any physical hardware at 42on (everything runs in the cloud) I wanted to see if I could run VirtualBox Headless inside a VM with Nested KVM enabled.

Nested KVM

The first thing I checked was whether my KVM Virtual Machine actually supported Nested KVM. This can be verified with the kvm-ok command under Ubuntu:

root@glrun01:~# kvm-ok 
INFO: /dev/kvm exists
KVM acceleration can be used
root@glrun01:~#
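
If kvm-ok reports that KVM acceleration cannot be used, nested virtualization probably still has to be enabled on the physical host. As a minimal sketch for an Intel host (for AMD the module is kvm_amd with the same nested parameter); the guest also needs a CPU model that exposes the virtualization flags, for example host-passthrough in libvirt:

# On the physical host: check whether nested virtualization is enabled
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently and reload the module (no VMs may be running)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
modprobe -r kvm_intel
modprobe kvm_intel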

Now that’s verified I tried to install VirtualBox.

VirtualBox

Installing VirtualBox is straightforward: just add the repository and install the packages. Don't forget to reboot afterwards to make sure all kernel modules are built and loaded properly.

apt-get install virtualbox
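
After the reboot a quick check shows whether the kernel module is loaded and the tools are usable; the exact version reported will of course depend on the repository you used:

lsmod | grep vboxdrv
VBoxManage --version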

VirtualBox Extension Pack

The trick to getting everything working properly is to install Oracle's VirtualBox Extension Pack. It took me a while to figure out that I had to install it manually; it isn't installed by default.

You need to download the pack and install it using the VBoxManage command.

wget http://download.virtualbox.org/virtualbox/5.0.24/Oracle_VM_VirtualBox_Extension_Pack-5.0.24.vbox-extpack
vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.0.24.vbox-extpack
vboxmanage list extpacks
vboxmanage setproperty vrdeextpack "Oracle VM VirtualBox Extension Pack"

With that installed and configured I rebooted the machine again just to be sure.

It works!

With that in place it actually worked. The VirtualBox VMs can now be built inside a Nested KVM machine controlled by GitLab's CI 🙂
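
As an illustration of what such a CI job ends up doing, this is roughly how a VM can be created and started headless with VBoxManage; the VM name and settings below are just placeholders, not the actual training setup:

# Create and register a VM (name and memory size are placeholders)
VBoxManage createvm --name ceph-node1 --register
VBoxManage modifyvm ceph-node1 --memory 2048 --vrde on --vrdeport 3389

# Start it without a GUI; the Extension Pack provides the VRDE server for remote access
VBoxManage startvm ceph-node1 --type headless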

Bonding, VLAN and bridging under Ubuntu 10.04

The last few weeks I spent a lot of time upgrading Ubuntu 9.10 systems to 10.04. These systems are SuperMicro blades with 2 NICs per blade.

By using bonding (active-backup) we combine eth0 and eth1 into bond0. On top of the bond we use 802.1Q VLANs, which gives us devices like bond0.100, bond0.303, and so on.

Those devices are then used to create bridges like vlanbr100 and vlanbr303, which give our KVM Virtual Machines access to our network.

This would result in a setup like:

eth0 -> |
        | -> bond0 -> bond0.100 -> vlanbr100
eth1 -> |          -> bond0.303 -> vlanbr303  
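
As a sketch, the corresponding /etc/network/interfaces configuration (using the ifenslave, vlan and bridge-utils packages) would look roughly like this for one VLAN; the VLAN ID and IP address are just examples, and the bond0.303/vlanbr303 pair is configured the same way:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100

auto bond0.100
iface bond0.100 inet manual
    vlan-raw-device bond0

auto vlanbr100
iface vlanbr100 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    bridge_ports bond0.100
    bridge_stp off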

Under Ubuntu 9.10 and before this setup worked fine, but under Ubuntu 10.04 we noticed that the network inside the Virtual Machine didn't work reliably. The ARP reply (is-at) was dropped at the bridge and never reached the Virtual Machine.

If I set the ARP entry manually inside the VM everything started working, but of course that is not how it is meant to be.

After hours of searching I found a Debian bug report that described exactly my problem!

It seems that Ubuntu's ifenslave-2.6 package (1.10-14) under 10.04 has exactly the same bug. Backporting the ifenslave package from 10.10 (1.10-15) fixed everything for me; my Virtual Machines started working again.

I created a bug report for this at Ubuntu; hopefully it will be fixed in 10.04 rather quickly.

For now, if you have the same problem, just backport the ifenslave package from 10.10 to 10.04.
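
One quick way to do that is to grab the 1.10-15 package from the 10.10 (maverick) archive and install it by hand; the exact filename and mirror path below are only an illustration and depend on your architecture and mirror:

# Hypothetical example: fetch the maverick ifenslave-2.6 package and install it on 10.04
wget http://archive.ubuntu.com/ubuntu/pool/main/i/ifenslave-2.6/ifenslave-2.6_1.10-15_amd64.deb
dpkg -i ifenslave-2.6_1.10-15_amd64.deb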