CloudStack: The given command does not exist or it is not available for user

So I was working on CloudStack today and I built new packages from the 4.2 branch to test some new things for the Ceph integration.

After installing the new packages and restarting my management server I wasn’t able to log on anymore. This is what I got:

The given command does not exist or it is not available for user

It took me quite some time to figure out what was going on, but after turning on MySQL logging it turned out that I was missing a column in one of the database tables. This is a dev setup where I build packages on a daily basis and make a lot of database changes manually.

The problem was that my database was out of sync with what the code expected it to be. When you go from version A to B the management server will upgrade the database accordingly, but I went from version B to B. That jump did include some database changes, but they weren’t handled by the DatabaseUpgradeChecker, which makes perfect sense since this is a dev server.

So should you encounter this message at some point, turn on MySQL query logging and look at the queries the management server runs. You’ll probably see that one of them is failing.
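For reference, here is a minimal way to turn on the query log on the MySQL server (the log file path is just an example); every statement the management server issues ends up in that file, so the failing one is easy to correlate with the error in the management server log:

$ mysql -u root -p
mysql> SET GLOBAL general_log_file = '/var/log/mysql/query.log';
mysql> SET GLOBAL general_log = 'ON';
mysql> exit
$ tail -f /var/log/mysql/query.log

Don’t forget to set general_log back to ‘OFF’ afterwards, the log grows quickly.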

This causes the whole management server not to start properly.

A quick note on running CloudStack with RBD on Ubuntu 12.04

When you want to use Ceph as Primary Storage in Apache CloudStack you need a recent version of libvirt with RBD storage pool support enabled.

If you want to use Ubuntu 12.04 LTS (Precise) you would need to manually compile libvirt since the default libvirt version doesn’t include RBD storage pool support.

But not any more! Ubuntu has its Cloud Archive, which is aimed at OpenStack, but that doesn’t matter; we just want a newer version of libvirt with RBD storage pool support.

So, add the Cloud Archive repository and an Apt source for Ceph and you can use RBD with CloudStack without compiling anything!

$ sudo apt-get install ubuntu-cloud-keyring
$ echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main | sudo tee /etc/apt/sources.list.d/cloud-archive.list
$ wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
$ echo deb http://eu.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo apt-get install cloudstack-agent

Voila, you now have all the packages you need to run a CloudStack agent with RBD support.
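As a quick sanity check (just a sketch, the exact output depends on the versions the Cloud Archive ships at the time), you can verify that the installed Qemu lists rbd among its supported formats and check which libvirt version you ended up with:

$ qemu-img --help | grep rbd
$ libvirtd --version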

CloudStack: Zone X is is not ready to launch console proxy yet

As you might know, I’m a committer in the Apache CloudStack project and I work on it on a daily basis.

I have a couple of development setups running and I upgraded one of them (where I do all my Ceph development) from 4.0 to 4.1 (which isn’t out yet), and suddenly I got this message in my logs:

Zone 1 is not ready to launch console proxy yet

That log line didn’t tell me that much, so I started digging through the code to find out WHY my Zone wasn’t ready, since it was working under 4.0.

It turns out that my global setting “secondary.storage.vm” wasn’t set to true and that caused my KVM zone not to work.

This setting can’t be changed through the Web UI (not sure why) and I had to change it in the database instead. After setting it to “true” my System VMs began to start again and all worked just fine.
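For reference, this is roughly what that change looks like directly in the database (assuming the default ‘cloud’ database and the standard configuration table; double-check the row on your own setup first):

$ mysql -u cloud -p cloud
mysql> UPDATE configuration SET value = 'true' WHERE name = 'secondary.storage.vm';
mysql> exit

Restart the management server afterwards so the new value gets picked up.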

It seems this was legacy on my end since the upgrade process doesn’t touch this setting at all. I’m adding some extra debugging to the code to make it a bit more clear WHY your zone isn’t ready.

Should you ever encounter this one, verify this setting.

Enhanced RBD support for CloudStack 4.2

About 1 hour ago the new storage subsystem got merged into the master branch of CloudStack. That is wonderful news for all of you out there who want to use features like snapshotting with RBD in CloudStack.

In pre-4.2 CloudStack a snapshot was the same as a backup. As soon as you created a snapshot it would also copy that snapshot to the secondary storage. This not only led to high network utilization when talking about 1TB RBD volumes, but it also caused problems with the underlying ‘qemu-img’ tool. To make a long story short: snapshots with RBD just wouldn’t work in CloudStack 4.0 or 4.1 without resorting to dirty hacking. Which we didn’t.

The new storage subsystem separates the backup and snapshot process. Snapshots are handled by the primary storage and they can be copied to the ‘backup storage’ on request. This allows us to use the full snapshot potential of RBD.
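To give you an idea of what that means (just an illustration with a made-up pool and image name, not literally what CloudStack does internally): RBD snapshots are created and rolled back on the Ceph cluster itself, so no data has to travel to secondary storage at all:

$ rbd snap create cloudstack/my-volume@my-snapshot
$ rbd snap ls cloudstack/my-volume
$ rbd snap rollback cloudstack/my-volume@my-snapshot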

I was waiting for the storage subsystem to be merged into the master branch before I could start working on this. About two weeks ago I already wrote a small functional spec in CloudStack’s wiki to describe what has to be done.

A couple of choices still have to be made. Traditionally we could do everything through libvirt and ‘qemu-img’, but from what I can see now we’ll run into some trouble. We might have to go through the process of wrapping librbd into a Java library to get it all done, but I’m not completely positive about that. Some patches for libvirt(-java) could probably also do the job, but it would take a lot of time and work to get those upstream and into the repositories. The goal is to have this new RBD code work natively on a Ubuntu 13.04 system.

The expectation is that CloudStack 4.2 will be released mid-July this year, but if you are a daredevil you can always track the master branch and play around with that.

I’ll post updates on the cloudstack-dev list on a regular basis about the progress, but you can also watch the master branch and search for commits with ‘RBD’ in the message.

Ceph distributed storage with CloudStack

As we are nearing the CloudStack 4.0 release I figured it was time I’d write something about the Ceph integration in CloudStack 4.0.

In the beginning of this year we (my company) decided we wanted to use CloudStack for our cloud product, but we also wanted to use Ceph for the storage. CloudStack lacked support for Ceph, so I decided I’d implement that.

Fast forward 4 months, a long flight to California, becoming a committer and PPMC member of CloudStack, various patches for libvirt(-java) and here we are, 25 September 2012!

RBD, the RADOS Block Device from Ceph, enables you to stripe disks for (virtual) machines across your Ceph cluster. This not only gives high performance, it gives you virtually unlimited scalability (without downtime!) and redundancy. Something your NetApp, EMC or EqualLogic SAN can’t give you.

Although I’m a very big fan of Nexenta (I use it a lot), it also has its limitations. A SAS environment won’t keep scaling forever and SAS is expensive! Yes, ZFS is truly awesome, but you can’t compare it to the distributed powers Ceph has.

The current implementation of RBD in CloudStack is for Primary Storage only, but that’s mainly what you want. It does have a couple of limitations though:

  • You still need either NFS or Local Storage for your System VMs
  • Snapshotting isn’t enabled (see below!)
  • It only works with KVM (Using RBD in Qemu)

If you are happy with that you’ll be able to allocate hundreds of TBs to your CloudStack cluster like it was nothing.

What do you need to use RBD for Primary Storage?

  • CloudStack 4.0 (RC2 is out now)
  • Hypervisors with Ubuntu 12.04.1
  • librbd and librados on your hypervisors (see the example after this list)
  • Libvirt 0.10.0 (Needs manual installation)
  • Qemu compiled with RBD enabled
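The librbd and librados part is just a package install on Ubuntu 12.04.1, assuming you have a Ceph Apt repository configured (package names as Ceph ships them):

$ sudo apt-get update
$ sudo apt-get install librados2 librbd1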

There is no need for special configuration on your Hypervisor; that’s all controlled by the Management Server. I’d however recommend that you test the Ceph connectivity first:

$ rbd -m <monitor address> --user <cephx id> --key <cephx key> ls

If that works you can go ahead and add the RBD Primary Storage pool to your CloudStack cluster. It should be there when adding a new storage pool.

It behaves like any storage pool in CloudStack, except the fact that it is running on the next generation of storage 🙂

About the snapshots: this will be implemented in a later version, probably 4.2. It mainly has to do with the way CloudStack currently handles snapshots. A major overhaul of the storage code is planned and as part of that I’ll implement snapshotting.

Testing is needed! So if you have the time, please test and report back!

You can find me on the Ceph and CloudStack IRC channels and mailing lists; feel free to contact me. Remember that I’m in GMT+2 (Netherlands).