Apache CloudStack and MySQL 5.7

SQL Mode

Starting with MySQL 5.7 the default SQL mode is far more strict than it was before.

It now includes ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, and NO_ENGINE_SUBSTITUTION.

This can cause problems for applications which need other SQL modes. Apache CloudStack is one of these applications.

The proper fix would be to modify the SQL queries executed by CloudStack, but that is not trivial.

Changing the mode

Luckily the SQL mode can be changed either in my.cnf or as a session variable.

In the my.cnf one can add:

[mysqld]
sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'

Or modify the /etc/cloudstack/management/db.properties file to include this line:

db.cloud.url.params=prepStmtCacheSize=517&cachePrepStmts=true&sessionVariables=sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'

You should now be able to run a CloudStack management server on MySQL 5.7!
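To confirm which mode your server and session actually use, you can simply ask MySQL. A quick check (the user and credentials below are just an example); ONLY_FULL_GROUP_BY should no longer show up for CloudStack's connections:

$ mysql -u root -p -e "SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;"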

Future

In the future CloudStack should only use SQL queries which comply with the new, stricter SQL mode. In the meantime an issue and a Pull Request have been created to track this situation.

ISC Kea DHCPv6 server

DHCPv6

In most situations Stateless Address Autoconfiguration (SLAAC) works just fine for simple clients in an IPv6 network. But in other cases you want to assign pre-defined addresses or prefixes to clients, and that is where DHCPv6 comes into play.

While working on the IPv6 implementation for Apache CloudStack I found Kea, a DHCPv6 server from ISC.

DHCPv6 DUID

With IPv4 you could easily identify a client based on the MAC address it sent the DHCP request from. With IPv6 there is the DUID, the “DHCP Unique Identifier”. This is generated by the client and then used by the DHCPv6 server. A few possibilities clients can choose from:

  • DUID-LL: DUID Based on Link-layer Address
  • DUID-LLT: Link-layer Address Plus Time
  • DUID-EN: Assigned by Vendor Based on Enterprise Number

While the DUID seems nice, it can’t be dictated by the DHCPv6 server. The client generates the DUID itself and sends it to the server. Not something you prefer if you are not in control of the clients.

In a cloud you are in control of the MAC address, so that is what you want to use where possible. It can’t be spoofed by the client.

ISC Kea

Kea is a DHCPv4/DHCPv6 server being developed by the Internet Systems Consortium. It is an extensible and flexible DHCP server. Facebook uses it in their datacenters.

My goal was very simple. Set up Kea and see if I can use it to hand out an address to a client.

Configuration

I downloaded the tarball and tested it with this configuration between two simple KVM VMs on my desktop.

{
    "Dhcp6": {
        "renew-timer": 1000,
        "rebind-timer": 2000,
        "preferred-lifetime": 3000,
        "valid-lifetime": 4000,
        "lease-database": {
            "type": "memfile",
            "persist": true,
            "name": "/tmp/kea-leases6.csv",
            "lfc-interval": 1800
        },
        "interfaces-config": {
            "interfaces": [ "eth1/2001:db8::1" ]
        },
        "mac-sources": ["duid"],
        "subnet6": [
            {
                "subnet": "2001:db8::/64",
                "id": 1024,
                "interface": "eth1",
                "pools": [
                    { "pool": "2001:db8::100-2001:db8::ffff" }
                ],
                "pd-pools": [
                    {
                        "prefix": "2001:db8:fff::",
                        "prefix-len": 48,
                        "delegated-len": 60
                    }
                ],
                "reservations": [
                    {
                        "hw-address": "52:54:00:d6:c2:a9",
                        "ip-addresses": [ "2001:db8::5054:ff:fed6:c2a9" ]
                    }
                ]
            }
        ]
    }
}

Starting Kea with this configuration was rather simple:

Starting Kea

$ kea-dhcp6 -c /etc/kea.json -d

Logs

When it starts you see some interesting bits in the log:

DHCP6_CONFIG_NEW_SUBNET a new subnet has been added to configuration: 2001:db8::/64 with params t1=1000, t2=2000, preferred-lifetime=3000, valid-lifetime=4000, rapid-commit is disabled
DHCPSRV_CFGMGR_ADD_SUBNET6 adding subnet 2001:db8::/64
HOSTS_CFG_ADD_HOST add the host for reservations: hwaddr=52:54:00:d6:c2:a9 ipv6_subnet_id=1024 hostname=(empty) ipv4_reservation=(no) ipv6_reservation0=2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ONE_SUBNET_ID_HWADDR_DUID get one host with IPv6 reservation for subnet id 1024, HWADDR hwtype=1 52:54:00:d6:c2:a9, DUID (no-duid)
HOSTS_CFG_GET_ALL_HWADDR_DUID get all hosts with reservations for HWADDR hwtype=1 52:54:00:d6:c2:a9 and DUID (no-duid)
HOSTS_CFG_GET_ALL_IDENTIFIER get all hosts with reservations using identifier: hwaddr=52:54:00:d6:c2:a9
HOSTS_CFG_GET_ALL_IDENTIFIER_COUNT using identifier hwaddr=52:54:00:d6:c2:a9, found 0 host(s)
HOSTS_CFG_GET_ONE_SUBNET_ID_HWADDR_DUID_NULL host not found using subnet id 1024, HW address hwtype=1 52:54:00:d6:c2:a9 and DUID (no-duid)
HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS6 get one host with reservation for subnet id 1024 and including IPv6 address 2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ALL_SUBNET_ID_ADDRESS6 get all hosts with reservations for subnet id 1024 and IPv6 address 2001:db8::5054:ff:fed6:c2a9
HOSTS_CFG_GET_ALL_SUBNET_ID_ADDRESS6_COUNT using subnet id 1024 and address 2001:db8::5054:ff:fed6:c2a9, found 0 host(s)
HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS6_NULL host not found using subnet id 1024 and address 2001:db8::5054:ff:fed6:c2a9
DHCPSRV_MEMFILE_DB opening memory file lease database: lfc-interval=1800 name=/tmp/kea-leases6.csv persist=true type=memfile universe=6
DHCPSRV_MEMFILE_LEASE_FILE_LOAD loading leases from file /tmp/kea-leases6.csv

You can also see the reservation based on the client’s MAC address being handed out after the client booted:

ALLOC_ENGINE_V6_HR_ADDR_GRANTED reserved address 2001:db8::5054:ff:fed6:c2a9 was assigned to client duid=[00:01:00:01:1e:47:7e:66:52:54:00:d6:c2:a9], tid=0xe7899a

Ubuntu client

The client was a simple Ubuntu 14.04 client with this network configuration:

auto eth0
iface eth0 inet dhcp
iface eth0 inet6 dhcp

And indeed, it obtained the correct address:

root@ubuntu1404:~# ip addr show dev eth0
2: eth0:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:d6:c2:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.100/24 brd 192.168.100.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8::5054:ff:fed6:c2a9/64 scope global deprecated dynamic 
       valid_lft 62sec preferred_lft 0sec
    inet6 fe80::5054:ff:fed6:c2a9/64 scope link 
       valid_lft forever preferred_lft forever
root@ubuntu1404:~#

Lease database

Kea can store the leases in a CSV file or in a MySQL database. In this test I used the CSV file /tmp/kea-leases6.csv to store the leases.

In production a MySQL database is probably easier to use, but for the test CSV worked just fine.
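For reference, switching to MySQL only means changing the lease-database section of the configuration. A minimal sketch; the database name, user and password are placeholders, so check the Kea documentation for your version:

"lease-database": {
    "type": "mysql",
    "name": "kea",
    "host": "localhost",
    "user": "kea",
    "password": "secret"
}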

Installing and testing NixOS

NixOS

NixOS is a minimal and flexible Linux distribution which doesn’t use any of the existing package managers.

NixOS is a Linux distribution with a unique approach to package and configuration management. Built on top of the Nix package manager, it is completely declarative, makes upgrading systems reliable, and has many other advantages.

I wanted to test NixOS and see if it could be a candidate for a very minimal KVM hypervisor running just Qemu, libvirt and Apache CloudStack.

With this post I just wanted to share how you can quickly install NixOS inside a VirtualBox VM.

VirtualBox

On my desktop and laptop I usually use VirtualBox to quickly test something inside Virtual Machines. In this case I downloaded the NixOS minimal 64-bit ISO and created a VM:

  • 1024MB of memory
  • 8GB SATA disk
  • NixOS ISO attached

Installation

After you start the VM it will boot from the ISO. You will then find yourself at a root prompt that simply says nixos.

The first step is to format your disk and mount it under /mnt.

parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary 0% 100%
mkfs.xfs /dev/sda1
mount /dev/sda1 /mnt

Once that is done you can run:

nixos-generate-config --root /mnt

This will generate /mnt/etc/nixos/configuration.nix from where you can configure your OS.

This is what I used as my configuration:

{ config, pkgs, ... }:

{
  imports = [
      ./hardware-configuration.nix
    ];

  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  boot.loader.grub.device = "/dev/sda";

  boot.kernelPackages = pkgs.linuxPackages_4_1;

  time.timeZone = "Europe/Amsterdam";

  networking.firewall.enable = false;

  environment.systemPackages = with pkgs; [
    wget git screen ceph
  ];

  services.openssh.enable = true;
  services.openssh.permitRootLogin = "yes";

  virtualisation.libvirtd.enable = true;
  virtualisation.libvirtd.extraOptions = ["-l"];
  virtualisation.libvirtd.extraConfig = "listen_tls = 0\nlisten_tcp = 1";

  system.stateVersion = "15.09";
}

A minimal installation with just OpenSSH and libvirt installed.

Now you can actually install NixOS:

nixos-install

After a few minutes you will be prompted for a root password and that’s it!

Reboot and you have a running NixOS installation 🙂
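Because the whole system is described in /etc/nixos/configuration.nix, later changes (adding packages, enabling services) are applied the same declarative way. A quick sketch:

# edit /etc/nixos/configuration.nix, then activate the new configuration:
$ nixos-rebuild switch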

Rebuilding libvirt under CentOS 7.1 with RBD storage pool support

If you want to use CentOS 7.1 for your hypervisors with Apache CloudStack and Ceph’s RBD as Primary Storage you need to rebuild libvirt.

CloudStack requires libvirt to be built with RBD storage pool support, since it uses libvirt to manage RBD volumes. By default libvirt under CentOS is not built with this support. (On Ubuntu it is, by the way.)
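To give an idea of what CloudStack does under the hood: with RBD support compiled in, libvirt can define a storage pool of type ‘rbd’. A minimal sketch of such a pool definition (the pool name, monitor address and secret UUID are placeholders):

<pool type="rbd">
  <name>cloudstack-primary</name>
  <source>
    <name>rbd</name>
    <host name="mon1.example.com" port="6789"/>
    <auth username="admin" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>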

Rebuilding from source

First we need to install a couple of packages:

$ yum install -y rpm-build gcc make ceph-devel

Now we need to download the sRPM:

$ wget http://vault.centos.org/centos/7.1.1503/os/Source/SPackages/libvirt-1.2.8-16.el7.src.rpm

Create a rpmbuild directory:

$ mkdir /root/rpmbuild

Now edit /root/.rpmmacros so that it contains:

%_topdir    /root/rpmbuild

Install the sRPM:

$ rpm -i libvirt-1.2.8-16.el7.src.rpm
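The spec file lists quite a few build dependencies. If rpmbuild complains later on about missing packages, yum-builddep from the yum-utils package can install them in one go; a convenience step, not strictly required if you already have everything:

$ yum install -y yum-utils
$ yum-builddep -y /root/rpmbuild/SPECS/libvirt.spec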

Open the /root/rpmbuild/SPECS/libvirt.spec file and look for:

%else
    %define with_storage_rbd      0
%endif

Change this to:

%else
    %define with_storage_rbd      1
%endif

Now build the RPM:

$ cd /root/rpmbuild
$ rpmbuild -ba SPECS/libvirt.spec

After a couple of minutes you should have RPMs with RBD storage pool support enabled!
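The resulting packages end up under /root/rpmbuild/RPMS. Installing them is then a matter of (package names may vary slightly per build):

$ yum localinstall -y /root/rpmbuild/RPMS/x86_64/libvirt-*.rpm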

PowerDNS backend for a global RADOS Gateway namespace

At my hosting company PCextreme we are building a cloud offering based on Ceph and CloudStack. We call our cloud services Aurora.

Our cloud services are composed of two components: Compute and Objects.

For our Aurora Objects service we use the RADOS Gateway from Ceph and we are using the Federated Config to create multiple regions.

At this moment we have one region o.auroraobjects.eu but we soon want to expand to multiple regions.

One of the things we/I wanted is a global namespace for all our regions: o.auroraobjects.com.

By design the RADOS Gateway returns an HTTP redirect when you connect to the ‘wrong’ region for a specific bucket, but a redirect means extra TCP packets going over the wire and thus additional, unneeded latency.

So I came up with the idea of using a custom PowerDNS backend to direct bucket traffic at the DNS level.

Imagine having a bucket ceph in the region ‘eu’ and the global namespace o.auroraobjects.com.

Using my custom backend the PowerDNS server will respond with a CNAME pointing the user towards the right hostname:

wido@wido-laptop:~$ host ceph.o.auroraobjects.com ns1.auroraobjects.com
Using domain server:
Name: ns1.auroraobjects.com
Address: 2a00:f10:121:400:48c:2ff:fe00:e6b#53
Aliases: 

ceph.o.auroraobjects.com is an alias for ceph.o.auroraobjects.eu.
wido@wido-laptop:~$

As you can see it responded with a CNAME pointing towards ceph.o.auroraobjects.eu.

This allows us to create multiple regions (eu, us, asia, etc) but keep one global namespace to make it easy to consume for our end-users.

Users can create a bucket in the region they like, but they never have to worry about which hostname to use. We take care of that.

This PowerDNS backend is in the Ceph master branch and can be installed as a WSGI application behind Apache.

I’ve put a small txt file online to demonstrate this: fetching it through the global .com hostname or through the regional .eu hostname returns the same object.

Deploying the backend for PowerDNS is fairly simple; I recommend you read the README, but here are a few config snippets.

Apache VirtualHost


<VirtualHost *:80>
	ServerAdmin webmaster@localhost

	DocumentRoot /var/www
	<Directory />
		Options FollowSymLinks
		AllowOverride None
	</Directory>
	<Directory /var/www/>
		Options Indexes FollowSymLinks MultiViews
		AllowOverride None
		Order allow,deny
		allow from all
	</Directory>

	ErrorLog ${APACHE_LOG_DIR}/error.log
	LogLevel warn
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	WSGIScriptAlias / /var/www/pdns-backend-rgw.py
</VirtualHost>

PowerDNS configuration

local-address=0.0.0.0
local-ipv6=::

cache-ttl=60
default-ttl=60
query-cache-ttl=60

launch=remote
remote-connection-string=http:url=http://localhost/dns

Note: You have to compile PowerDNS manually with --with-modules=remote --enable-remotebackend-http
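For those building from source, a rough sketch of that build (install prefix and any additional modules are up to you):

$ ./configure --with-modules=remote --enable-remotebackend-http
$ make
$ sudo make install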

Don’t forget to put a rgw-pdns.conf in /etc/ceph with the correct configuration.

This is still a work-in-progress on my side and I’ll probably make some commits in the coming months, but feedback is much appreciated!

SQL connection error after upgrade to CloudStack 4.3.0

I just upgraded a small cluster of mine from CloudStack 4.2.1 to 4.3.0 and after installing the packages on my Ubuntu system the management server wouldn’t start due to a SQL error:

2014-03-25 20:52:13,643 INFO  [c.c.u.d.T.Transaction] (main:null) Is Data Base High Availiability enabled? Ans : false
2014-03-25 20:52:13,736 ERROR [c.c.u.d.Merovingian2] (main:null) Unable to get a new db connection
java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/cloud?autoReconnect=true&prepStmtCacheSize=517&cachePrepStmts=true
	at java.sql.DriverManager.getConnection(DriverManager.java:635)
	at java.sql.DriverManager.getConnection(DriverManager.java:195)

I quickly remembered the licensing issue around the JDBC driver which delayed 4.3.0, and I was right: the management server was missing the JAR/package for the SQL connection.

A quick apt-get install fixed it:

$ sudo apt-get install libmysql-java
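If you want to verify that the JDBC driver is now actually present, the package ships the connector JAR; listing the package contents should show a mysql-connector-java JAR under /usr/share/java (exact paths may differ per Ubuntu release):

$ dpkg -L libmysql-java | grep -i jar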

This should have been a dependency of the ‘cloudstack-management’ package, but that somehow slipped through. I already applied a patch in the master branch and I’ll make sure it gets into 4.3.1 and 4.4.0.

So if you are running Ubuntu and are upgrading to CloudStack 4.3.0 and run into this issue, simply install the package and it’s fixed.

CloudStack: The given command does not exist or it is not available for user

So I was working on CloudStack today and I built new packages from the 4.2 branch to test some new things for the Ceph integration.

After installing the new packages and restarting my management server I wasn’t able to log in anymore. This is what I got:

The given command does not exist or it is not available for user

It took me quite some time to figure out what was going on, but after turning on MySQL logging it turned out that I was missing a column in one of the database tables. This is a dev setup where I build packages on a daily basis and make a lot of database changes manually.

The problem was that my database was out of sync with what the code expected it to be. When you go from version A to B the management server will upgrade the database accordingly, but I went from version B to B. That version did include some database changes, but they weren’t handled by the DatabaseUpgradeChecker, which makes perfect sense since this is a dev server.

So should you encounter this message at some point, turn on MySQL query logging and look at the queries the management server executes. You’ll probably see that one of them is failing, and that failure prevents the whole management server from starting properly.
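Enabling the general query log can be done at runtime without restarting MySQL; something along these lines (the log file location is just an example, and don’t forget to turn it off again afterwards):

mysql> SET GLOBAL general_log_file = '/var/log/mysql/all-queries.log';
mysql> SET GLOBAL general_log = 'ON';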

A quick note on running CloudStack with RBD on Ubuntu 12.04

When you want to use Ceph as Primary Storage in Apache CloudStack you need a recent version of libvirt with RBD storage pool support enabled.

If you want to use Ubuntu 12.04 LTS (Precise) you would need to manually compile libvirt since the default libvirt version doesn’t include RBD storage pool support.

But not any more! Ubuntu has its Cloud Archive, which is aimed at OpenStack, but that doesn’t matter; we just want a newer version of libvirt with RBD storage pool support.

So, add the Cloud Archive repository and an Apt source for Ceph and you can use RBD with CloudStack without compiling anything!

$ sudo apt-get install ubuntu-cloud-keyring
$ echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main | sudo tee /etc/apt/sources.list.d/cloud-archive.list
$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
$ echo deb http://eu.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo apt-get install cloudstack-agent

Voila, you now have all the packages you need to run a CloudStack agent with RBD support.

CloudStack: Zone X is not ready to launch console proxy yet

As you might know, I’m a committer in the Apache CloudStack project and I work on it on a daily basis.

I have a couple of development setups running and I upgraded one of them (where I do all my Ceph development) from 4.0 to 4.1 (which isn’t out yet), and suddenly I got this message in my logs:

Zone 1 is not ready to launch console proxy yet

That log line didn’t tell me much, so I started digging through the code to find out WHY my zone wasn’t ready, since it was working under 4.0.

It turned out that my global setting “secondary.storage.vm” wasn’t set to true, and that caused my KVM zone not to work.

This setting can’t be changed through the Web UI (not sure why) and I had to change it in the database instead. After setting it to “true” my System VMs began to start again and all worked just fine.
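For the record, the change itself boils down to a single update in the cloud database, something along these lines (table and column names taken from my 4.1-era schema, so double-check against your own database and restart the management server afterwards):

mysql> USE cloud;
mysql> UPDATE configuration SET value = 'true' WHERE name = 'secondary.storage.vm';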

It seems this was legacy on my end, since the upgrade process doesn’t touch this setting at all. I’m adding some extra debugging to the code to make it a bit clearer WHY your zone isn’t ready.

Should you ever encounter this one, verify this setting.

Enhanced RBD support for CloudStack 4.2

About 1 hour ago the new storage subsystem got merged into the master branch of CloudStack. That is wonderful news for all you out there who want to use features like snapshotting with RBD in CloudStack.

In pre-4.2 CloudStack a snapshot was the same as a backup. As soon as you created a snapshot it would also copy that snapshot to the secondary storage. This could not only lead to high network utilization when talking about 1TB RBD volumes, but it also caused problems with the underlying ‘qemu-img’ tool. To make a long story short: Snapshots with RBD just wouldn’t work in CloudStack 4.0 or 4.1 without resorting to dirty hacking. Which we didn’t.

The new storage subsystem separates the backup and snapshot processes. Snapshots are handled by the primary storage and they can be copied to the ‘backup storage’ on request. This allows us to use the full snapshot potential of RBD.

I was waiting for the storage subsystem to be merged into the master branch before I could start working on this. About two weeks ago I already wrote a small functional spec in CloudStack’s wiki to describe what has to be done.

A couple of choices still have to be made. Traditionally we could do everything through libvirt and ‘qemu-img’, but from what I can see now we’ll run into some trouble. We might have to go through the process of wrapping librbd into a Java library to get it all done, but I’m not completely positive about that. Some patches for libvirt(-java) could probably also do the job, but it would take a lot of time and work to get those upstream and into the repositories. The goal is to have this new RBD code work natively on an Ubuntu 13.04 system.

The expectation is that CloudStack 4.2 will be released mid-July this year, but if you are a daredevil you can always track the master branch and play around with that.

I’ll post updates about the progress on the cloudstack-dev list on a regular basis, but you can also watch the master branch and search for commits with ‘RBD’ in the message.