As we are nearing the CloudStack 4.0 release, I figured it was time to write something about the Ceph integration in this release.
At the beginning of this year we (my company) decided we wanted to use CloudStack for our cloud product, but we also wanted to use Ceph for the storage. CloudStack lacked support for Ceph, so I decided I’d implement it.
Fast forward 4 months, a long flight to California, becoming a committer and PPMC member of CloudStack, various patches for libvirt(-java) and here we are, 25 September 2012!
RBD, the RADOS Block Device from Ceph, enables you to stripe disks for (virtual) machines across your Ceph cluster. This not only gives high performance, it also gives you virtually unlimited scalability (without downtime!) and redundancy: something your NetApp, EMC or EqualLogic SAN can’t give you.
Although I’m a very big fan of Nexenta (I use it a lot), it also has its limitations. A SAS environment won’t keep scaling forever, and SAS is expensive! Yes, ZFS is truly awesome, but you can’t compare it to the distributed powers Ceph has.
The current implementation of RBD in CloudStack is for Primary Storage only, but that’s mainly what you want it for. It does have a couple of limitations though:
- You still need either NFS or Local Storage for your System VMs
- Snapshotting isn’t enabled (see below!)
- It only works with KVM (using RBD in Qemu; see the sketch below)
If you are happy with that, you’ll be able to allocate hundreds of TBs to your CloudStack cluster like it was nothing.
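To give an idea of what that KVM limitation means in practice, here is a minimal sketch of Qemu talking to RBD directly (the pool ‘rbd’ and image ‘test’ are just example names, and your Qemu obviously needs to be built with RBD support):

# Create a 10GB RAW image directly in the Ceph pool 'rbd'
qemu-img create -f raw rbd:rbd/test 10G

# Inspect the image through the rbd protocol
qemu-img info rbd:rbd/test

This is essentially the mechanism CloudStack relies on: libvirt points Qemu at an RBD image and Qemu talks to the Ceph cluster natively.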
What do you need to use RBD for Primary Storage?
- CloudStack 4.0 (RC2 is out now)
- Hypervisors with Ubuntu 12.04.1
- librbd and librados on your hypervisors
- Libvirt 0.10.0 (Needs manual installation)
- Qemu compiled with RBD enabled (see the verification sketch below)
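A minimal sketch to get those prerequisites in place on Ubuntu 12.04.1 (the package names are the standard Ubuntu ones; Libvirt 0.10.0 itself you’ll have to build or fetch elsewhere, since 12.04 ships an older version):

# Install the Ceph client libraries
apt-get install librbd1 librados2

# Verify the libvirt version (should be 0.10.0 or newer)
libvirtd --version

# Verify that Qemu was built with RBD support ('rbd' should show up in the supported formats)
qemu-img --help | grep rbd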
There is no need for special configuration on your hypervisor; that’s all controlled by the Management Server. I’d recommend, however, that you test the Ceph connectivity first:
rbd -m <monitor address> --user <cephx id> --key <cephx key> ls
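For example, with some placeholder values filled in (the monitor address, cephx id and key here are obviously not real):

rbd -m 10.0.0.1 --user admin --key AQBsomesecretkey== ls

Note that ‘rbd ls’ defaults to the pool named ‘rbd’; add -p <pool> if you want to list a different pool.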
If that works, you can go ahead and add the RBD Primary Storage pool to your CloudStack cluster; it should be available as an option when adding a new storage pool.
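On the Ceph side you probably want a dedicated pool and cephx user for CloudStack. A minimal sketch, assuming a pool named ‘cloudstack’ (the pool name and placement group count are up to you):

# Create a dedicated pool for CloudStack with 128 placement groups
ceph osd pool create cloudstack 128

# Look up the key of the cephx user you want CloudStack to use
ceph auth list

The monitor address, pool, cephx id and key are then what you enter when adding the storage pool in CloudStack.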
It behaves like any other storage pool in CloudStack, except for the fact that it is running on the next generation of storage 🙂
About the snapshots: these will be implemented in a later version, probably 4.2. It mainly has to do with the way CloudStack currently handles snapshots. A major overhaul of the storage code is planned, and as part of that I’ll implement snapshotting.
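In the meantime nothing stops you from snapshotting at the Ceph level yourself; CloudStack just won’t know about those snapshots. A quick sketch (pool and image names are examples):

# Create a snapshot of an RBD image
rbd snap create cloudstack/myimage@mysnap

# List the snapshots of that image
rbd snap ls cloudstack/myimage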
Testing is needed! So if you have the time, please test and report back!
You can find me on the Ceph and CloudStack IRC channels and mailing lists; feel free to contact me. Remember that I’m in GMT+2 (Netherlands).